00:00:00.001 Started by upstream project "autotest-per-patch" build number 120656
00:00:00.001 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.064 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.065 The recommended git tool is: git
00:00:00.065 using credential 00000000-0000-0000-0000-000000000002
00:00:00.066 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.091 Fetching changes from the remote Git repository
00:00:00.092 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.139 Using shallow fetch with depth 1
00:00:00.139 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.139 > git --version # timeout=10
00:00:00.176 > git --version # 'git version 2.39.2'
00:00:00.176 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.178 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.178 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.162 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.174 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.186 Checking out Revision a704ed4d86859cb8cbec080c78b138476da6ee34 (FETCH_HEAD)
00:00:04.186 > git config core.sparsecheckout # timeout=10
00:00:04.198 > git read-tree -mu HEAD # timeout=10
00:00:04.216 > git checkout -f a704ed4d86859cb8cbec080c78b138476da6ee34 # timeout=5
00:00:04.235 Commit message: "packer: Insert post-processors only if at least one is defined"
00:00:04.235 > git rev-list --no-walk a704ed4d86859cb8cbec080c78b138476da6ee34 # timeout=10
00:00:04.314 [Pipeline] Start of Pipeline
00:00:04.330 [Pipeline] library
00:00:04.332 Loading library shm_lib@master
00:00:04.332 Library shm_lib@master is cached. Copying from home.
00:00:04.350 [Pipeline] node
00:00:04.372 Running on WFP37 in /var/jenkins/workspace/nvmf-phy-autotest
00:00:04.373 [Pipeline] {
00:00:04.384 [Pipeline] catchError
00:00:04.385 [Pipeline] {
00:00:04.401 [Pipeline] wrap
00:00:04.408 [Pipeline] {
00:00:04.412 [Pipeline] stage
00:00:04.413 [Pipeline] { (Prologue)
00:00:04.558 [Pipeline] sh
00:00:05.347 + logger -p user.info -t JENKINS-CI
00:00:05.369 [Pipeline] echo
00:00:05.371 Node: WFP37
00:00:05.379 [Pipeline] sh
00:00:05.728 [Pipeline] setCustomBuildProperty
00:00:05.741 [Pipeline] echo
00:00:05.743 Cleanup processes
00:00:05.749 [Pipeline] sh
00:00:06.041 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:06.041 4747 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:06.054 [Pipeline] sh
00:00:06.342 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:06.342 ++ grep -v 'sudo pgrep'
00:00:06.342 ++ awk '{print $1}'
00:00:06.342 + sudo kill -9
00:00:06.342 + true
00:00:06.356 [Pipeline] cleanWs
00:00:06.365 [WS-CLEANUP] Deleting project workspace...
00:00:06.365 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.377 [WS-CLEANUP] done
00:00:06.380 [Pipeline] setCustomBuildProperty
00:00:06.392 [Pipeline] sh
00:00:06.694 + sudo git config --global --replace-all safe.directory '*'
00:00:06.778 [Pipeline] nodesByLabel
00:00:06.780 Found a total of 1 nodes with the 'sorcerer' label
00:00:06.789 [Pipeline] httpRequest
00:00:07.032 HttpMethod: GET
00:00:07.033 URL: http://10.211.164.101/packages/jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz
00:00:07.740 Sending request to url: http://10.211.164.101/packages/jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz
00:00:07.987 Response Code: HTTP/1.1 200 OK
00:00:08.061 Success: Status code 200 is in the accepted range: 200,404
00:00:08.062 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz
00:00:10.903 [Pipeline] sh
00:00:11.197 + tar --no-same-owner -xf jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz
00:00:11.217 [Pipeline] httpRequest
00:00:11.223 HttpMethod: GET
00:00:11.224 URL: http://10.211.164.101/packages/spdk_77a84e60e073c769797deff624cc274e83a9e621.tar.gz
00:00:11.226 Sending request to url: http://10.211.164.101/packages/spdk_77a84e60e073c769797deff624cc274e83a9e621.tar.gz
00:00:11.229 Response Code: HTTP/1.1 200 OK
00:00:11.230 Success: Status code 200 is in the accepted range: 200,404
00:00:11.231 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_77a84e60e073c769797deff624cc274e83a9e621.tar.gz
00:00:34.012 [Pipeline] sh
00:00:34.301 + tar --no-same-owner -xf spdk_77a84e60e073c769797deff624cc274e83a9e621.tar.gz
00:00:36.856 [Pipeline] sh
00:00:37.145 + git -C spdk log --oneline -n5
00:00:37.145 77a84e60e nvmf/tcp: add nvmf_qpair_set_ctrlr helper function
00:00:37.145 2731ac8c5 app/trace: emit owner descriptions
00:00:37.145 c064dc584 trace: rename trace_event's poller_id to owner_id
00:00:37.145 23f700383 trace: add concept of "owner" to trace files
00:00:37.145 67f328f92 trace: rename "per_lcore_history" to just "data"
00:00:37.158 [Pipeline] }
00:00:37.177 [Pipeline] // stage
00:00:37.184 [Pipeline] stage
00:00:37.186 [Pipeline] { (Prepare)
00:00:37.205 [Pipeline] writeFile
00:00:37.222 [Pipeline] sh
00:00:37.504 + logger -p user.info -t JENKINS-CI
00:00:37.517 [Pipeline] sh
00:00:37.803 + logger -p user.info -t JENKINS-CI
00:00:37.816 [Pipeline] sh
00:00:38.102 + cat autorun-spdk.conf
00:00:38.102 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:38.102 SPDK_TEST_NVMF=1
00:00:38.102 SPDK_TEST_NVME_CLI=1
00:00:38.102 SPDK_TEST_NVMF_NICS=mlx5
00:00:38.102 SPDK_RUN_UBSAN=1
00:00:38.102 NET_TYPE=phy
00:00:38.111 RUN_NIGHTLY=0
00:00:38.115 [Pipeline] readFile
00:00:38.158 [Pipeline] withEnv
00:00:38.160 [Pipeline] {
00:00:38.172 [Pipeline] sh
00:00:38.458 + set -ex
00:00:38.458 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]]
00:00:38.458 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:00:38.458 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:38.458 ++ SPDK_TEST_NVMF=1
00:00:38.458 ++ SPDK_TEST_NVME_CLI=1
00:00:38.458 ++ SPDK_TEST_NVMF_NICS=mlx5
00:00:38.458 ++ SPDK_RUN_UBSAN=1
00:00:38.458 ++ NET_TYPE=phy
00:00:38.458 ++ RUN_NIGHTLY=0
00:00:38.458 + case $SPDK_TEST_NVMF_NICS in
00:00:38.458 + DRIVERS=mlx5_ib
00:00:38.458 + [[ -n mlx5_ib ]]
00:00:38.458 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:38.458 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:45.043 rmmod: ERROR: Module irdma is not currently loaded
00:00:45.043 rmmod: ERROR: Module i40iw is not currently loaded
00:00:45.043 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:45.043 + true
00:00:45.043 + for D in $DRIVERS
00:00:45.043 + sudo modprobe mlx5_ib
00:00:45.043 + exit 0
00:00:45.054 [Pipeline] }
00:00:45.071 [Pipeline] // withEnv
00:00:45.076 [Pipeline] }
00:00:45.092 [Pipeline] // stage
00:00:45.101 [Pipeline] catchError
00:00:45.102 [Pipeline] {
00:00:45.115 [Pipeline] timeout
00:00:45.116 Timeout set to expire in 40 min
00:00:45.117 [Pipeline] {
00:00:45.132 [Pipeline] stage
00:00:45.134 [Pipeline] { (Tests)
00:00:45.149 [Pipeline] sh
00:00:45.439 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest
00:00:45.439 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest
00:00:45.439 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest
00:00:45.439 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]]
00:00:45.439 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:45.439 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output
00:00:45.439 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]]
00:00:45.439 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:00:45.439 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output
00:00:45.439 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:00:45.439 + cd /var/jenkins/workspace/nvmf-phy-autotest
00:00:45.439 + source /etc/os-release
00:00:45.439 ++ NAME='Fedora Linux'
00:00:45.439 ++ VERSION='38 (Cloud Edition)'
00:00:45.439 ++ ID=fedora
00:00:45.439 ++ VERSION_ID=38
00:00:45.439 ++ VERSION_CODENAME=
00:00:45.439 ++ PLATFORM_ID=platform:f38
00:00:45.439 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:45.439 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:45.439 ++ LOGO=fedora-logo-icon
00:00:45.439 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:45.439 ++ HOME_URL=https://fedoraproject.org/
00:00:45.439 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:45.439 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:45.439 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:45.439 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:45.439 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:45.439 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:45.439 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:45.439 ++ SUPPORT_END=2024-05-14
00:00:45.439 ++ VARIANT='Cloud Edition'
00:00:45.439 ++ VARIANT_ID=cloud
00:00:45.439 + uname -a
00:00:45.439 Linux spdk-wfp-37 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:45.439 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:00:47.983 Hugepages
00:00:47.983 node hugesize free / total
00:00:47.983 node0 1048576kB 0 / 0
00:00:47.983 node0 2048kB 0 / 0
00:00:47.983 node1 1048576kB 0 / 0
00:00:47.983 node1 2048kB 0 / 0
00:00:47.983
00:00:47.983 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:47.983 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:47.983 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:47.983 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:47.983 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:47.983 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:47.983 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:47.983 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:47.983 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:47.983 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:47.983 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:47.983 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:47.983 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:47.983 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:47.983 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:47.983 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:47.983 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:47.983 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:47.983 + rm -f /tmp/spdk-ld-path
00:00:47.983 + source autorun-spdk.conf
00:00:47.983 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:47.983 ++ SPDK_TEST_NVMF=1
00:00:47.983 ++ SPDK_TEST_NVME_CLI=1
00:00:47.983 ++ SPDK_TEST_NVMF_NICS=mlx5
00:00:47.983 ++ SPDK_RUN_UBSAN=1
00:00:47.983 ++ NET_TYPE=phy
00:00:47.983 ++ RUN_NIGHTLY=0
00:00:47.983 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:47.983 + [[ -n '' ]]
00:00:47.983 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:47.983 + for M in /var/spdk/build-*-manifest.txt
00:00:47.983 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:47.983 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:00:47.983 + for M in /var/spdk/build-*-manifest.txt
00:00:47.983 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:47.983 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:00:47.983 ++ uname
00:00:47.983 + [[ Linux == \L\i\n\u\x ]]
00:00:47.983 + sudo dmesg -T
00:00:47.983 + sudo dmesg --clear
00:00:47.983 + dmesg_pid=5678
00:00:47.983 + [[ Fedora Linux == FreeBSD ]]
00:00:47.983 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:47.983 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:47.983 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:47.983 + sudo dmesg -Tw
00:00:47.983 + [[ -x /usr/src/fio-static/fio ]]
00:00:47.983 + export FIO_BIN=/usr/src/fio-static/fio
00:00:47.983 + FIO_BIN=/usr/src/fio-static/fio
00:00:47.983 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:47.983 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:47.983 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:47.983 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:47.983 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:47.983 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:47.983 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:47.983 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:47.983 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:00:47.983 Test configuration:
00:00:47.983 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:47.983 SPDK_TEST_NVMF=1
00:00:47.983 SPDK_TEST_NVME_CLI=1
00:00:47.983 SPDK_TEST_NVMF_NICS=mlx5
00:00:47.983 SPDK_RUN_UBSAN=1
00:00:47.983 NET_TYPE=phy
00:00:48.244 RUN_NIGHTLY=0
03:52:02 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
03:52:02 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]]
03:52:02 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
03:52:02 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
03:52:02 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
03:52:02 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
03:52:02 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
03:52:02 -- paths/export.sh@5 -- $ export PATH
03:52:02 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
03:52:02 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
03:52:02 -- common/autobuild_common.sh@435 -- $ date +%s
03:52:02 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713491522.XXXXXX
03:52:02 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713491522.wytLdn
03:52:02 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
03:52:02 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
03:52:02 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
00:00:48.244 03:52:02 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:48.244 03:52:02 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:48.244 03:52:02 -- common/autobuild_common.sh@451 -- $ get_config_params
00:00:48.244 03:52:02 -- common/autotest_common.sh@385 -- $ xtrace_disable
00:00:48.245 03:52:02 -- common/autotest_common.sh@10 -- $ set +x
00:00:48.245 03:52:02 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:00:48.245 03:52:02 -- common/autobuild_common.sh@453 -- $ start_monitor_resources
00:00:48.245 03:52:02 -- pm/common@17 -- $ local monitor
00:00:48.245 03:52:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:48.245 03:52:02 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5712
00:00:48.245 03:52:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:48.245 03:52:02 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5714
00:00:48.245 03:52:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:48.245 03:52:02 -- pm/common@21 -- $ date +%s
00:00:48.245 03:52:02 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5716
00:00:48.245 03:52:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:48.245 03:52:02 -- pm/common@21 -- $ date +%s
00:00:48.245 03:52:02 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5719
00:00:48.245 03:52:02 -- pm/common@26 -- $ sleep 1
00:00:48.245 03:52:02 -- pm/common@21 -- $ date +%s
00:00:48.245 03:52:02 -- pm/common@21 -- $ date +%s
00:00:48.245 03:52:02 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713491522
00:00:48.245 03:52:02 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713491522
00:00:48.245 03:52:02 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713491522
00:00:48.245 03:52:02 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713491522
00:00:48.245 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713491522_collect-vmstat.pm.log
00:00:48.245 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713491522_collect-cpu-load.pm.log
00:00:48.245 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713491522_collect-bmc-pm.bmc.pm.log
00:00:48.245 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713491522_collect-cpu-temp.pm.log
00:00:49.188 03:52:03 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
00:00:49.188 03:52:03 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:49.188 03:52:03 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:49.188 03:52:03 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:49.188 03:52:03 -- spdk/autobuild.sh@16 -- $ date -u
00:00:49.188 Fri Apr 19 01:52:03 AM UTC 2024
00:00:49.188 03:52:03 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:49.188 v24.05-pre-415-g77a84e60e
00:00:49.188 03:52:03 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:49.188 03:52:03 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:49.188 03:52:03 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:49.188 03:52:03 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:00:49.188 03:52:03 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:00:49.188 03:52:03 -- common/autotest_common.sh@10 -- $ set +x
00:00:49.448 ************************************
00:00:49.448 START TEST ubsan
00:00:49.448 ************************************
00:00:49.448 03:52:03 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan'
00:00:49.448 using ubsan
00:00:49.448 00:00:49.448 real 0m0.000s
00:00:49.448 user 0m0.000s
00:00:49.448 sys 0m0.000s
00:00:49.448 03:52:03 -- common/autotest_common.sh@1112 -- $ xtrace_disable
00:00:49.448 03:52:03 -- common/autotest_common.sh@10 -- $ set +x
00:00:49.448 ************************************
00:00:49.448 END TEST ubsan
00:00:49.448 ************************************
00:00:49.448 03:52:03 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:49.448 03:52:03 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:49.448 03:52:03 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:49.448 03:52:03 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:49.448 03:52:03 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:49.448 03:52:03 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:49.448 03:52:03 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:49.448 03:52:03 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:49.448 03:52:03 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared
00:00:49.448 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk
00:00:49.448 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:00:50.827 Using 'verbs' RDMA provider
00:01:06.296 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:16.284 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:16.284 Creating mk/config.mk...done.
00:01:16.284 Creating mk/cc.flags.mk...done.
00:01:16.284 Type 'make' to build.
00:01:16.284 03:52:30 -- spdk/autobuild.sh@69 -- $ run_test make make -j112
00:01:16.284 03:52:30 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:16.284 03:52:30 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:16.284 03:52:30 -- common/autotest_common.sh@10 -- $ set +x
00:01:16.284 ************************************
00:01:16.284 START TEST make
00:01:16.284 ************************************
00:01:16.284 03:52:30 -- common/autotest_common.sh@1111 -- $ make -j112
00:01:16.853 make[1]: Nothing to be done for 'all'.
00:01:24.985 The Meson build system
00:01:24.985 Version: 1.3.1
00:01:24.985 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk
00:01:24.985 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp
00:01:24.985 Build type: native build
00:01:24.985 Program cat found: YES (/usr/bin/cat)
00:01:24.985 Project name: DPDK
00:01:24.985 Project version: 23.11.0
00:01:24.985 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:24.985 C linker for the host machine: cc ld.bfd 2.39-16
00:01:24.985 Host machine cpu family: x86_64
00:01:24.985 Host machine cpu: x86_64
00:01:24.985 Message: ## Building in Developer Mode ##
00:01:24.985 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:24.985 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:24.986 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:24.986 Program python3 found: YES (/usr/bin/python3)
00:01:24.986 Program cat found: YES (/usr/bin/cat)
00:01:24.986 Compiler for C supports arguments -march=native: YES
00:01:24.986 Checking for size of "void *" : 8
00:01:24.986 Checking for size of "void *" : 8 (cached)
00:01:24.986 Library m found: YES
00:01:24.986 Library numa found: YES
00:01:24.986 Has header "numaif.h" : YES
00:01:24.986 Library fdt found: NO
00:01:24.986 Library execinfo found: NO
00:01:24.986 Has header "execinfo.h" : YES
00:01:24.986 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:24.986 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:24.986 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:24.986 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:24.986 Run-time dependency openssl found: YES 3.0.9
00:01:24.986 Run-time dependency libpcap found: YES 1.10.4
00:01:24.986 Has header "pcap.h" with dependency libpcap: YES
00:01:24.986 Compiler for C supports arguments -Wcast-qual: YES
00:01:24.986 Compiler for C supports arguments -Wdeprecated: YES
00:01:24.986 Compiler for C supports arguments -Wformat: YES
00:01:24.986 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:24.986 Compiler for C supports arguments -Wformat-security: NO
00:01:24.986 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:24.986 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:24.986 Compiler for C supports arguments -Wnested-externs: YES
00:01:24.986 Compiler for C supports arguments -Wold-style-definition: YES
00:01:24.986 Compiler for C supports arguments -Wpointer-arith: YES
00:01:24.986 Compiler for C supports arguments -Wsign-compare: YES
00:01:24.986 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:24.986 Compiler for C supports arguments -Wundef: YES
00:01:24.986 Compiler for C supports arguments -Wwrite-strings: YES
00:01:24.986 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:24.986 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:24.986 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:24.986 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:24.986 Program objdump found: YES (/usr/bin/objdump)
00:01:24.986 Compiler for C supports arguments -mavx512f: YES
00:01:24.986 Checking if "AVX512 checking" compiles: YES
00:01:24.986 Fetching value of define "__SSE4_2__" : 1
00:01:24.986 Fetching value of define "__AES__" : 1
00:01:24.986 Fetching value of define "__AVX__" : 1
00:01:24.986 Fetching value of define "__AVX2__" : 1
00:01:24.986 Fetching value of define "__AVX512BW__" : 1
00:01:24.986 Fetching value of define "__AVX512CD__" : 1
00:01:24.986 Fetching value of define "__AVX512DQ__" : 1
00:01:24.986 Fetching value of define "__AVX512F__" : 1
00:01:24.986 Fetching value of define "__AVX512VL__" : 1
00:01:24.986 Fetching value of define "__PCLMUL__" : 1
00:01:24.986 Fetching value of define "__RDRND__" : 1
00:01:24.986 Fetching value of define "__RDSEED__" : 1
00:01:24.986 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:24.986 Fetching value of define "__znver1__" : (undefined)
00:01:24.986 Fetching value of define "__znver2__" : (undefined)
00:01:24.986 Fetching value of define "__znver3__" : (undefined)
00:01:24.986 Fetching value of define "__znver4__" : (undefined)
00:01:24.986 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:24.986 Message: lib/log: Defining dependency "log"
00:01:24.986 Message: lib/kvargs: Defining dependency "kvargs"
00:01:24.986 Message: lib/telemetry: Defining dependency "telemetry"
00:01:24.986 Checking for function "getentropy" : NO
00:01:24.986 Message: lib/eal: Defining dependency "eal"
00:01:24.986 Message: lib/ring: Defining dependency "ring"
00:01:24.986 Message: lib/rcu: Defining dependency "rcu"
00:01:24.986 Message: lib/mempool: Defining dependency "mempool"
00:01:24.986 Message: lib/mbuf: Defining dependency "mbuf"
00:01:24.986 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:24.986 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:24.986 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:24.986 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:24.986 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:24.986 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:24.986 Compiler for C supports arguments -mpclmul: YES
00:01:24.986 Compiler for C supports arguments -maes: YES
00:01:24.986 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:24.986 Compiler for C supports arguments -mavx512bw: YES
00:01:24.986 Compiler for C supports arguments -mavx512dq: YES
00:01:24.986 Compiler for C supports arguments -mavx512vl: YES
00:01:24.986 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:24.986 Compiler for C supports arguments -mavx2: YES
00:01:24.986 Compiler for C supports arguments -mavx: YES
00:01:24.986 Message: lib/net: Defining dependency "net"
00:01:24.986 Message: lib/meter: Defining dependency "meter"
00:01:24.986 Message: lib/ethdev: Defining dependency "ethdev"
00:01:24.986 Message: lib/pci: Defining dependency "pci"
00:01:24.986 Message: lib/cmdline: Defining dependency "cmdline"
00:01:24.986 Message: lib/hash: Defining dependency "hash"
00:01:24.986 Message: lib/timer: Defining dependency "timer"
00:01:24.986 Message: lib/compressdev: Defining dependency "compressdev"
00:01:24.986 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:24.986 Message: lib/dmadev: Defining dependency "dmadev"
00:01:24.986 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:24.986 Message: lib/power: Defining dependency "power"
00:01:24.986 Message: lib/reorder: Defining dependency "reorder"
00:01:24.986 Message: lib/security: Defining dependency "security"
00:01:24.986 Has header "linux/userfaultfd.h" : YES
00:01:24.986 Has header "linux/vduse.h" : YES
00:01:24.986 Message: lib/vhost: Defining dependency "vhost"
00:01:24.986 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:24.986 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:24.986 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:24.986 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:24.986 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:24.986 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:24.986 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:24.986 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:24.986 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:24.986 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:24.986 Program doxygen found: YES (/usr/bin/doxygen)
00:01:24.986 Configuring doxy-api-html.conf using configuration
00:01:24.986 Configuring doxy-api-man.conf using configuration
00:01:24.986 Program mandb found: YES (/usr/bin/mandb)
00:01:24.986 Program sphinx-build found: NO
00:01:24.986 Configuring rte_build_config.h using configuration
00:01:24.986 Message:
00:01:24.986 =================
00:01:24.986 Applications Enabled
00:01:24.986 =================
00:01:24.986
00:01:24.986 apps:
00:01:24.986
00:01:24.986
00:01:24.986 Message:
00:01:24.986 =================
00:01:24.986 Libraries Enabled
00:01:24.986 =================
00:01:24.986
00:01:24.986 libs:
00:01:24.986 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:24.986 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:24.986 cryptodev, dmadev, power, reorder, security, vhost,
00:01:24.986
00:01:24.986 Message:
00:01:24.986 ===============
00:01:24.986 Drivers Enabled
00:01:24.986 ===============
00:01:24.986
00:01:24.986 common:
00:01:24.986
00:01:24.986 bus:
00:01:24.986 pci, vdev,
00:01:24.986 mempool:
00:01:24.986 ring,
00:01:24.986 dma:
00:01:24.986
00:01:24.986 net:
00:01:24.986
00:01:24.986 crypto:
00:01:24.986
00:01:24.986 compress:
00:01:24.986
00:01:24.986 vdpa:
00:01:24.986
00:01:24.986
00:01:24.986 Message:
00:01:24.986 =================
00:01:24.986 Content Skipped
00:01:24.986 =================
00:01:24.986
00:01:24.986 apps:
00:01:24.986 dumpcap: explicitly disabled via build config
00:01:24.986 graph: explicitly disabled via build config
00:01:24.986 pdump: explicitly disabled via build config
00:01:24.986 proc-info: explicitly disabled via build config
00:01:24.986 test-acl: explicitly disabled via build config
00:01:24.986 test-bbdev: explicitly disabled via build config
00:01:24.986 test-cmdline: explicitly disabled via build config
00:01:24.986 test-compress-perf: explicitly disabled via build config
00:01:24.986 test-crypto-perf: explicitly disabled via build config
00:01:24.986 test-dma-perf: explicitly disabled via build config
00:01:24.986 test-eventdev: explicitly disabled via build config
00:01:24.986 test-fib: explicitly disabled via build config
00:01:24.986 test-flow-perf: explicitly disabled via build config
00:01:24.986 test-gpudev: explicitly disabled via build config
00:01:24.986 test-mldev: explicitly disabled via build config
00:01:24.986 test-pipeline: explicitly disabled via build config
00:01:24.986 test-pmd: explicitly disabled via build config
00:01:24.986 test-regex: explicitly disabled via build config
00:01:24.986 test-sad: explicitly disabled via build config
00:01:24.986 test-security-perf: explicitly disabled via build config
00:01:24.986
00:01:24.986 libs:
00:01:24.986 metrics: explicitly disabled via build config
00:01:24.986 acl: explicitly disabled via build config
00:01:24.986 bbdev: explicitly disabled via build config
00:01:24.986 bitratestats: explicitly disabled via build config
00:01:24.986 bpf: explicitly disabled via build config
00:01:24.986 cfgfile: explicitly disabled via build config
00:01:24.986 distributor: explicitly disabled via build config
00:01:24.986 efd: explicitly disabled via build config
00:01:24.986 eventdev: explicitly disabled via build config
00:01:24.986 dispatcher: explicitly disabled via build config
00:01:24.986 gpudev: explicitly disabled via build config
00:01:24.986 gro: explicitly disabled via build config
00:01:24.986 gso: explicitly disabled via build config
00:01:24.986 ip_frag: explicitly disabled via build config
00:01:24.986 jobstats: explicitly disabled via build config
00:01:24.986 latencystats: explicitly disabled via build config
00:01:24.986 lpm: explicitly disabled via build config
00:01:24.986 member: explicitly disabled via build config
00:01:24.986 pcapng: explicitly disabled via build config
00:01:24.986 rawdev: explicitly disabled via build config
00:01:24.986 regexdev: explicitly disabled via build config
00:01:24.986 mldev: explicitly disabled via build config
00:01:24.986 rib: explicitly disabled via build config
00:01:24.987 sched: explicitly disabled via build config
00:01:24.987 stack: explicitly disabled via build config
00:01:24.987 ipsec: explicitly disabled via build config
00:01:24.987 pdcp: explicitly disabled via build config
00:01:24.987 fib: explicitly disabled via build config
00:01:24.987 port: explicitly disabled via build config
00:01:24.987 pdump: explicitly disabled via build config
00:01:24.987 table: explicitly disabled via build config
00:01:24.987 pipeline: explicitly disabled via build config
00:01:24.987 graph: explicitly disabled via build config
00:01:24.987 node: explicitly disabled via build config
00:01:24.987
00:01:24.987 drivers:
00:01:24.987 common/cpt: not in enabled drivers build config
00:01:24.987 common/dpaax: not in enabled drivers build config
00:01:24.987 common/iavf: not in enabled drivers build config
00:01:24.987 common/idpf: not in enabled drivers build config
00:01:24.987 common/mvep: not in enabled drivers build config
00:01:24.987 common/octeontx: not in enabled drivers build config
00:01:24.987 bus/auxiliary: not in enabled drivers build config
00:01:24.987 bus/cdx: not in enabled drivers build config
00:01:24.987 bus/dpaa: not in enabled drivers build config
00:01:24.987 bus/fslmc: not in enabled drivers build config
00:01:24.987 bus/ifpga: not in enabled drivers build config
00:01:24.987 bus/platform: not in enabled drivers build config
00:01:24.987 bus/vmbus: not in enabled drivers build config
00:01:24.987 common/cnxk: not in enabled drivers build config
00:01:24.987 common/mlx5: not in enabled drivers build config
00:01:24.987 common/nfp: not in enabled drivers build config
00:01:24.987 common/qat: not in enabled drivers build config
00:01:24.987 common/sfc_efx: not in enabled drivers build config
00:01:24.987 mempool/bucket: not in enabled drivers build config
00:01:24.987 mempool/cnxk: not in enabled drivers build config
00:01:24.987 mempool/dpaa: not in enabled drivers build config
00:01:24.987 mempool/dpaa2: not in enabled drivers build
config 00:01:24.987 mempool/octeontx: not in enabled drivers build config 00:01:24.987 mempool/stack: not in enabled drivers build config 00:01:24.987 dma/cnxk: not in enabled drivers build config 00:01:24.987 dma/dpaa: not in enabled drivers build config 00:01:24.987 dma/dpaa2: not in enabled drivers build config 00:01:24.987 dma/hisilicon: not in enabled drivers build config 00:01:24.987 dma/idxd: not in enabled drivers build config 00:01:24.987 dma/ioat: not in enabled drivers build config 00:01:24.987 dma/skeleton: not in enabled drivers build config 00:01:24.987 net/af_packet: not in enabled drivers build config 00:01:24.987 net/af_xdp: not in enabled drivers build config 00:01:24.987 net/ark: not in enabled drivers build config 00:01:24.987 net/atlantic: not in enabled drivers build config 00:01:24.987 net/avp: not in enabled drivers build config 00:01:24.987 net/axgbe: not in enabled drivers build config 00:01:24.987 net/bnx2x: not in enabled drivers build config 00:01:24.987 net/bnxt: not in enabled drivers build config 00:01:24.987 net/bonding: not in enabled drivers build config 00:01:24.987 net/cnxk: not in enabled drivers build config 00:01:24.987 net/cpfl: not in enabled drivers build config 00:01:24.987 net/cxgbe: not in enabled drivers build config 00:01:24.987 net/dpaa: not in enabled drivers build config 00:01:24.987 net/dpaa2: not in enabled drivers build config 00:01:24.987 net/e1000: not in enabled drivers build config 00:01:24.987 net/ena: not in enabled drivers build config 00:01:24.987 net/enetc: not in enabled drivers build config 00:01:24.987 net/enetfec: not in enabled drivers build config 00:01:24.987 net/enic: not in enabled drivers build config 00:01:24.987 net/failsafe: not in enabled drivers build config 00:01:24.987 net/fm10k: not in enabled drivers build config 00:01:24.987 net/gve: not in enabled drivers build config 00:01:24.987 net/hinic: not in enabled drivers build config 00:01:24.987 net/hns3: not in enabled drivers build 
config 00:01:24.987 net/i40e: not in enabled drivers build config 00:01:24.987 net/iavf: not in enabled drivers build config 00:01:24.987 net/ice: not in enabled drivers build config 00:01:24.987 net/idpf: not in enabled drivers build config 00:01:24.987 net/igc: not in enabled drivers build config 00:01:24.987 net/ionic: not in enabled drivers build config 00:01:24.987 net/ipn3ke: not in enabled drivers build config 00:01:24.987 net/ixgbe: not in enabled drivers build config 00:01:24.987 net/mana: not in enabled drivers build config 00:01:24.987 net/memif: not in enabled drivers build config 00:01:24.987 net/mlx4: not in enabled drivers build config 00:01:24.987 net/mlx5: not in enabled drivers build config 00:01:24.987 net/mvneta: not in enabled drivers build config 00:01:24.987 net/mvpp2: not in enabled drivers build config 00:01:24.987 net/netvsc: not in enabled drivers build config 00:01:24.987 net/nfb: not in enabled drivers build config 00:01:24.987 net/nfp: not in enabled drivers build config 00:01:24.987 net/ngbe: not in enabled drivers build config 00:01:24.987 net/null: not in enabled drivers build config 00:01:24.987 net/octeontx: not in enabled drivers build config 00:01:24.987 net/octeon_ep: not in enabled drivers build config 00:01:24.987 net/pcap: not in enabled drivers build config 00:01:24.987 net/pfe: not in enabled drivers build config 00:01:24.987 net/qede: not in enabled drivers build config 00:01:24.987 net/ring: not in enabled drivers build config 00:01:24.987 net/sfc: not in enabled drivers build config 00:01:24.987 net/softnic: not in enabled drivers build config 00:01:24.987 net/tap: not in enabled drivers build config 00:01:24.987 net/thunderx: not in enabled drivers build config 00:01:24.987 net/txgbe: not in enabled drivers build config 00:01:24.987 net/vdev_netvsc: not in enabled drivers build config 00:01:24.987 net/vhost: not in enabled drivers build config 00:01:24.987 net/virtio: not in enabled drivers build config 00:01:24.987 
net/vmxnet3: not in enabled drivers build config 00:01:24.987 raw/*: missing internal dependency, "rawdev" 00:01:24.987 crypto/armv8: not in enabled drivers build config 00:01:24.987 crypto/bcmfs: not in enabled drivers build config 00:01:24.987 crypto/caam_jr: not in enabled drivers build config 00:01:24.987 crypto/ccp: not in enabled drivers build config 00:01:24.987 crypto/cnxk: not in enabled drivers build config 00:01:24.987 crypto/dpaa_sec: not in enabled drivers build config 00:01:24.987 crypto/dpaa2_sec: not in enabled drivers build config 00:01:24.987 crypto/ipsec_mb: not in enabled drivers build config 00:01:24.987 crypto/mlx5: not in enabled drivers build config 00:01:24.987 crypto/mvsam: not in enabled drivers build config 00:01:24.987 crypto/nitrox: not in enabled drivers build config 00:01:24.987 crypto/null: not in enabled drivers build config 00:01:24.987 crypto/octeontx: not in enabled drivers build config 00:01:24.987 crypto/openssl: not in enabled drivers build config 00:01:24.987 crypto/scheduler: not in enabled drivers build config 00:01:24.987 crypto/uadk: not in enabled drivers build config 00:01:24.987 crypto/virtio: not in enabled drivers build config 00:01:24.987 compress/isal: not in enabled drivers build config 00:01:24.987 compress/mlx5: not in enabled drivers build config 00:01:24.987 compress/octeontx: not in enabled drivers build config 00:01:24.987 compress/zlib: not in enabled drivers build config 00:01:24.987 regex/*: missing internal dependency, "regexdev" 00:01:24.987 ml/*: missing internal dependency, "mldev" 00:01:24.987 vdpa/ifc: not in enabled drivers build config 00:01:24.987 vdpa/mlx5: not in enabled drivers build config 00:01:24.987 vdpa/nfp: not in enabled drivers build config 00:01:24.987 vdpa/sfc: not in enabled drivers build config 00:01:24.987 event/*: missing internal dependency, "eventdev" 00:01:24.987 baseband/*: missing internal dependency, "bbdev" 00:01:24.987 gpu/*: missing internal dependency, "gpudev" 
00:01:24.987 
00:01:24.987 
00:01:24.987 Build targets in project: 85
00:01:24.987 
00:01:24.987 DPDK 23.11.0
00:01:24.987 
00:01:24.987 User defined options
00:01:24.987 buildtype : debug
00:01:24.987 default_library : shared
00:01:24.987 libdir : lib
00:01:24.987 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:01:24.987 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:24.987 c_link_args : 
00:01:24.987 cpu_instruction_set: native
00:01:24.987 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib
00:01:24.987 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib
00:01:24.987 enable_docs : false
00:01:24.987 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:24.987 enable_kmods : false
00:01:24.987 tests : false
00:01:24.987 
00:01:24.987 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:24.987 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp'
00:01:24.987 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:24.987 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:24.987 [3/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:24.987 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:24.987 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:24.987 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:24.987 [7/265] Compiling C object
lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:24.987 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:24.987 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:24.987 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:24.987 [11/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:24.987 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:24.987 [13/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:24.987 [14/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:24.987 [15/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:24.987 [16/265] Linking static target lib/librte_kvargs.a 00:01:24.987 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:24.987 [18/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:24.987 [19/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:24.987 [20/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:24.987 [21/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:24.987 [22/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:24.987 [23/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:24.987 [24/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:24.988 [25/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:24.988 [26/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:24.988 [27/265] Linking static target lib/librte_pci.a 00:01:24.988 [28/265] Linking static target lib/librte_log.a 00:01:24.988 [29/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:24.988 [30/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 
00:01:24.988 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:24.988 [32/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:24.988 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:24.988 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:24.988 [35/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:25.252 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:25.252 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:25.252 [38/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:25.252 [39/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:25.252 [40/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:25.252 [41/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:25.252 [42/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:25.252 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:25.252 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:25.252 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:25.252 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:25.252 [47/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:25.252 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:25.252 [49/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:25.252 [50/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:25.252 [51/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:25.252 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:25.252 [53/265] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:25.252 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:25.252 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:25.252 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:25.513 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:25.513 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:25.513 [59/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:25.513 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:25.513 [61/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.513 [62/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:25.513 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:25.513 [64/265] Linking static target lib/librte_meter.a 00:01:25.513 [65/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:25.513 [66/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:25.513 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:25.513 [68/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:25.513 [69/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:25.513 [70/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:25.513 [71/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:25.513 [72/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:25.513 [73/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:25.513 [74/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:25.513 [75/265] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:25.513 [76/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:25.513 [77/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:25.513 [78/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:25.513 [79/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:25.513 [80/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:25.513 [81/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:25.513 [82/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:25.513 [83/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.513 [84/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:25.513 [85/265] Linking static target lib/librte_telemetry.a 00:01:25.513 [86/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:25.513 [87/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:25.513 [88/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:25.513 [89/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:25.513 [90/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:25.513 [91/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:25.513 [92/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:25.513 [93/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:25.513 [94/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:25.513 [95/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:25.513 [96/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:25.513 [97/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:25.513 [98/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:25.513 [99/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:25.513 [100/265] Linking static target lib/librte_cmdline.a 00:01:25.513 [101/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:25.513 [102/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:25.513 [103/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:25.513 [104/265] Linking static target lib/librte_timer.a 00:01:25.513 [105/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:25.513 [106/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:25.513 [107/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:25.513 [108/265] Linking static target lib/librte_ring.a 00:01:25.513 [109/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:25.513 [110/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:25.514 [111/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:25.514 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:25.514 [113/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:25.514 [114/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:25.514 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:25.514 [116/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:25.514 [117/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:25.514 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:25.514 [119/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:25.514 [120/265] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:25.514 [121/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:25.514 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:25.514 [123/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:25.514 [124/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:25.514 [125/265] Linking static target lib/librte_mempool.a 00:01:25.514 [126/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:25.514 [127/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:25.514 [128/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:25.514 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:25.514 [130/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:25.514 [131/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:25.514 [132/265] Linking static target lib/librte_rcu.a 00:01:25.514 [133/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:25.514 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:25.514 [135/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:25.514 [136/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:25.514 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:25.514 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:25.514 [139/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:25.514 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:25.514 [141/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:25.514 [142/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:25.514 [143/265] Compiling C 
object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:25.514 [144/265] Linking static target lib/librte_compressdev.a 00:01:25.514 [145/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:25.514 [146/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:25.514 [147/265] Linking static target lib/librte_net.a 00:01:25.514 [148/265] Linking static target lib/librte_eal.a 00:01:25.514 [149/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:25.514 [150/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:25.514 [151/265] Linking static target lib/librte_dmadev.a 00:01:25.514 [152/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:25.514 [153/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:25.514 [154/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.514 [155/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:25.514 [156/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.773 [157/265] Linking static target lib/librte_power.a 00:01:25.773 [158/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:25.773 [159/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:25.773 [160/265] Linking static target lib/librte_reorder.a 00:01:25.773 [161/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:25.773 [162/265] Linking target lib/librte_log.so.24.0 00:01:25.773 [163/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:25.773 [164/265] Linking static target lib/librte_security.a 00:01:25.773 [165/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:25.773 [166/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:25.773 [167/265] Linking 
static target drivers/libtmp_rte_bus_vdev.a 00:01:25.773 [168/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:25.773 [169/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:25.773 [170/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:25.773 [171/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:25.773 [172/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.773 [173/265] Linking static target lib/librte_mbuf.a 00:01:25.773 [174/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:25.773 [175/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:25.773 [176/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:25.773 [177/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.773 [178/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:25.773 [179/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.773 [180/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:25.773 [181/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.773 [182/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:25.773 [183/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:25.773 [184/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.773 [185/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:25.773 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:25.773 [187/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:25.773 [188/265] Linking target lib/librte_kvargs.so.24.0 00:01:25.773 [189/265] Compiling C 
object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:26.032 [190/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:26.032 [191/265] Linking target lib/librte_telemetry.so.24.0 00:01:26.032 [192/265] Linking static target lib/librte_cryptodev.a 00:01:26.032 [193/265] Linking static target lib/librte_hash.a 00:01:26.032 [194/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:26.032 [195/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:26.032 [196/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:26.032 [197/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:26.032 [198/265] Linking static target drivers/librte_bus_vdev.a 00:01:26.032 [199/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:26.032 [200/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:26.032 [201/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:26.032 [202/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:26.032 [203/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.032 [204/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.032 [205/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:26.032 [206/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:26.032 [207/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:26.032 [208/265] Linking static target drivers/librte_bus_pci.a 00:01:26.032 [209/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:26.032 [210/265] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:26.032 [211/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:26.032 [212/265] Linking static target drivers/librte_mempool_ring.a 00:01:26.291 [213/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.291 [214/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.291 [215/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.291 [216/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.291 [217/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:26.550 [218/265] Linking static target lib/librte_ethdev.a 00:01:26.550 [219/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.550 [220/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:26.550 [221/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.550 [222/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.550 [223/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.807 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.375 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:27.375 [226/265] Linking static target lib/librte_vhost.a 00:01:27.636 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.548 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.827 [229/265] Generating lib/ethdev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:35.398 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.658 [231/265] Linking target lib/librte_eal.so.24.0 00:01:35.658 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:35.658 [233/265] Linking target lib/librte_ring.so.24.0 00:01:35.658 [234/265] Linking target lib/librte_pci.so.24.0 00:01:35.658 [235/265] Linking target lib/librte_meter.so.24.0 00:01:35.658 [236/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:35.658 [237/265] Linking target lib/librte_timer.so.24.0 00:01:35.658 [238/265] Linking target lib/librte_dmadev.so.24.0 00:01:35.917 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:35.917 [240/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:35.917 [241/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:35.917 [242/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:35.917 [243/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:35.917 [244/265] Linking target lib/librte_rcu.so.24.0 00:01:35.917 [245/265] Linking target lib/librte_mempool.so.24.0 00:01:35.917 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:35.918 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:35.918 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:36.180 [249/265] Linking target lib/librte_mbuf.so.24.0 00:01:36.180 [250/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:36.180 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:36.180 [252/265] Linking target lib/librte_compressdev.so.24.0 00:01:36.180 [253/265] Linking target lib/librte_net.so.24.0 00:01:36.180 [254/265] Linking target 
lib/librte_reorder.so.24.0 00:01:36.180 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:01:36.439 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:36.439 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:36.439 [258/265] Linking target lib/librte_cmdline.so.24.0 00:01:36.439 [259/265] Linking target lib/librte_security.so.24.0 00:01:36.439 [260/265] Linking target lib/librte_hash.so.24.0 00:01:36.439 [261/265] Linking target lib/librte_ethdev.so.24.0 00:01:36.439 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:36.439 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:36.698 [264/265] Linking target lib/librte_power.so.24.0 00:01:36.698 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:36.698 INFO: autodetecting backend as ninja 00:01:36.698 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 112 00:01:37.649 CC lib/log/log_flags.o 00:01:37.649 CC lib/log/log.o 00:01:37.649 CC lib/log/log_deprecated.o 00:01:37.649 CC lib/ut_mock/mock.o 00:01:37.649 CC lib/ut/ut.o 00:01:37.649 LIB libspdk_ut_mock.a 00:01:37.649 LIB libspdk_log.a 00:01:37.649 LIB libspdk_ut.a 00:01:37.649 SO libspdk_ut_mock.so.6.0 00:01:37.649 SO libspdk_log.so.7.0 00:01:37.649 SO libspdk_ut.so.2.0 00:01:37.908 SYMLINK libspdk_ut_mock.so 00:01:37.908 SYMLINK libspdk_ut.so 00:01:37.908 SYMLINK libspdk_log.so 00:01:38.166 CC lib/dma/dma.o 00:01:38.166 CXX lib/trace_parser/trace.o 00:01:38.166 CC lib/ioat/ioat.o 00:01:38.166 CC lib/util/base64.o 00:01:38.166 CC lib/util/bit_array.o 00:01:38.166 CC lib/util/cpuset.o 00:01:38.166 CC lib/util/crc32.o 00:01:38.166 CC lib/util/crc16.o 00:01:38.166 CC lib/util/crc32c.o 00:01:38.166 CC lib/util/crc32_ieee.o 00:01:38.166 CC lib/util/crc64.o 00:01:38.166 CC lib/util/dif.o 00:01:38.166 CC 
lib/util/fd.o 00:01:38.166 CC lib/util/file.o 00:01:38.166 CC lib/util/hexlify.o 00:01:38.166 CC lib/util/iov.o 00:01:38.166 CC lib/util/math.o 00:01:38.166 CC lib/util/pipe.o 00:01:38.166 CC lib/util/strerror_tls.o 00:01:38.166 CC lib/util/string.o 00:01:38.166 CC lib/util/uuid.o 00:01:38.166 CC lib/util/fd_group.o 00:01:38.166 CC lib/util/xor.o 00:01:38.166 CC lib/util/zipf.o 00:01:38.166 LIB libspdk_dma.a 00:01:38.166 CC lib/vfio_user/host/vfio_user.o 00:01:38.166 CC lib/vfio_user/host/vfio_user_pci.o 00:01:38.425 SO libspdk_dma.so.4.0 00:01:38.425 LIB libspdk_ioat.a 00:01:38.425 SYMLINK libspdk_dma.so 00:01:38.425 SO libspdk_ioat.so.7.0 00:01:38.425 SYMLINK libspdk_ioat.so 00:01:38.425 LIB libspdk_vfio_user.a 00:01:38.425 SO libspdk_vfio_user.so.5.0 00:01:38.425 LIB libspdk_util.a 00:01:38.425 SYMLINK libspdk_vfio_user.so 00:01:38.685 SO libspdk_util.so.9.0 00:01:38.685 SYMLINK libspdk_util.so 00:01:38.944 CC lib/vmd/vmd.o 00:01:38.944 CC lib/vmd/led.o 00:01:38.944 CC lib/idxd/idxd.o 00:01:38.944 CC lib/idxd/idxd_user.o 00:01:38.944 CC lib/rdma/common.o 00:01:38.944 CC lib/rdma/rdma_verbs.o 00:01:38.944 CC lib/json/json_parse.o 00:01:38.944 CC lib/json/json_util.o 00:01:38.944 CC lib/conf/conf.o 00:01:38.944 CC lib/json/json_write.o 00:01:38.944 CC lib/env_dpdk/env.o 00:01:38.944 CC lib/env_dpdk/memory.o 00:01:38.944 CC lib/env_dpdk/pci.o 00:01:38.944 CC lib/env_dpdk/init.o 00:01:38.944 CC lib/env_dpdk/threads.o 00:01:38.944 CC lib/env_dpdk/pci_ioat.o 00:01:38.944 CC lib/env_dpdk/pci_virtio.o 00:01:38.944 CC lib/env_dpdk/pci_vmd.o 00:01:38.944 CC lib/env_dpdk/pci_idxd.o 00:01:38.944 CC lib/env_dpdk/pci_event.o 00:01:38.944 CC lib/env_dpdk/pci_dpdk.o 00:01:38.944 CC lib/env_dpdk/sigbus_handler.o 00:01:38.944 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:38.944 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:39.203 LIB libspdk_conf.a 00:01:39.203 LIB libspdk_json.a 00:01:39.203 SO libspdk_conf.so.6.0 00:01:39.203 LIB libspdk_rdma.a 00:01:39.203 SYMLINK libspdk_conf.so 00:01:39.203 
SO libspdk_json.so.6.0 00:01:39.203 SO libspdk_rdma.so.6.0 00:01:39.203 SYMLINK libspdk_json.so 00:01:39.203 SYMLINK libspdk_rdma.so 00:01:39.462 LIB libspdk_idxd.a 00:01:39.462 SO libspdk_idxd.so.12.0 00:01:39.462 LIB libspdk_vmd.a 00:01:39.462 SYMLINK libspdk_idxd.so 00:01:39.462 SO libspdk_vmd.so.6.0 00:01:39.462 LIB libspdk_trace_parser.a 00:01:39.462 SYMLINK libspdk_vmd.so 00:01:39.462 SO libspdk_trace_parser.so.5.0 00:01:39.462 CC lib/jsonrpc/jsonrpc_server.o 00:01:39.462 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:39.462 CC lib/jsonrpc/jsonrpc_client.o 00:01:39.462 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:39.721 SYMLINK libspdk_trace_parser.so 00:01:39.721 LIB libspdk_jsonrpc.a 00:01:39.721 SO libspdk_jsonrpc.so.6.0 00:01:39.980 SYMLINK libspdk_jsonrpc.so 00:01:39.980 LIB libspdk_env_dpdk.a 00:01:39.980 SO libspdk_env_dpdk.so.14.0 00:01:39.980 SYMLINK libspdk_env_dpdk.so 00:01:40.239 CC lib/rpc/rpc.o 00:01:40.239 LIB libspdk_rpc.a 00:01:40.239 SO libspdk_rpc.so.6.0 00:01:40.499 SYMLINK libspdk_rpc.so 00:01:40.758 CC lib/notify/notify.o 00:01:40.758 CC lib/notify/notify_rpc.o 00:01:40.758 CC lib/keyring/keyring.o 00:01:40.758 CC lib/keyring/keyring_rpc.o 00:01:40.758 CC lib/trace/trace.o 00:01:40.758 CC lib/trace/trace_flags.o 00:01:40.758 CC lib/trace/trace_rpc.o 00:01:40.758 LIB libspdk_notify.a 00:01:40.759 SO libspdk_notify.so.6.0 00:01:41.016 LIB libspdk_keyring.a 00:01:41.016 LIB libspdk_trace.a 00:01:41.016 SYMLINK libspdk_notify.so 00:01:41.016 SO libspdk_keyring.so.1.0 00:01:41.016 SO libspdk_trace.so.10.0 00:01:41.016 SYMLINK libspdk_keyring.so 00:01:41.016 SYMLINK libspdk_trace.so 00:01:41.275 CC lib/thread/thread.o 00:01:41.275 CC lib/thread/iobuf.o 00:01:41.275 CC lib/sock/sock.o 00:01:41.275 CC lib/sock/sock_rpc.o 00:01:41.533 LIB libspdk_sock.a 00:01:41.533 SO libspdk_sock.so.9.0 00:01:41.791 SYMLINK libspdk_sock.so 00:01:42.050 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:42.050 CC lib/nvme/nvme_ctrlr.o 00:01:42.050 CC lib/nvme/nvme_fabric.o 00:01:42.050 
CC lib/nvme/nvme_ns_cmd.o 00:01:42.050 CC lib/nvme/nvme_ns.o 00:01:42.050 CC lib/nvme/nvme_pcie_common.o 00:01:42.050 CC lib/nvme/nvme_pcie.o 00:01:42.050 CC lib/nvme/nvme_qpair.o 00:01:42.050 CC lib/nvme/nvme.o 00:01:42.050 CC lib/nvme/nvme_quirks.o 00:01:42.050 CC lib/nvme/nvme_transport.o 00:01:42.050 CC lib/nvme/nvme_discovery.o 00:01:42.050 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:42.051 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:42.051 CC lib/nvme/nvme_tcp.o 00:01:42.051 CC lib/nvme/nvme_opal.o 00:01:42.051 CC lib/nvme/nvme_io_msg.o 00:01:42.051 CC lib/nvme/nvme_poll_group.o 00:01:42.051 CC lib/nvme/nvme_zns.o 00:01:42.051 CC lib/nvme/nvme_stubs.o 00:01:42.051 CC lib/nvme/nvme_auth.o 00:01:42.051 CC lib/nvme/nvme_cuse.o 00:01:42.051 CC lib/nvme/nvme_rdma.o 00:01:42.309 LIB libspdk_thread.a 00:01:42.309 SO libspdk_thread.so.10.0 00:01:42.309 SYMLINK libspdk_thread.so 00:01:42.568 CC lib/accel/accel.o 00:01:42.568 CC lib/accel/accel_rpc.o 00:01:42.568 CC lib/accel/accel_sw.o 00:01:42.568 CC lib/virtio/virtio.o 00:01:42.568 CC lib/virtio/virtio_vhost_user.o 00:01:42.568 CC lib/init/json_config.o 00:01:42.568 CC lib/init/subsystem.o 00:01:42.568 CC lib/virtio/virtio_vfio_user.o 00:01:42.568 CC lib/init/subsystem_rpc.o 00:01:42.568 CC lib/virtio/virtio_pci.o 00:01:42.568 CC lib/init/rpc.o 00:01:42.568 CC lib/blob/blobstore.o 00:01:42.568 CC lib/blob/request.o 00:01:42.568 CC lib/blob/zeroes.o 00:01:42.568 CC lib/blob/blob_bs_dev.o 00:01:42.828 LIB libspdk_init.a 00:01:42.828 SO libspdk_init.so.5.0 00:01:42.828 LIB libspdk_virtio.a 00:01:42.828 SYMLINK libspdk_init.so 00:01:42.828 SO libspdk_virtio.so.7.0 00:01:43.086 SYMLINK libspdk_virtio.so 00:01:43.086 CC lib/event/app.o 00:01:43.086 CC lib/event/reactor.o 00:01:43.086 CC lib/event/log_rpc.o 00:01:43.086 CC lib/event/app_rpc.o 00:01:43.086 CC lib/event/scheduler_static.o 00:01:43.345 LIB libspdk_accel.a 00:01:43.345 SO libspdk_accel.so.15.0 00:01:43.345 SYMLINK libspdk_accel.so 00:01:43.345 LIB libspdk_nvme.a 
00:01:43.345 SO libspdk_nvme.so.13.0 00:01:43.345 LIB libspdk_event.a 00:01:43.605 SO libspdk_event.so.13.0 00:01:43.605 SYMLINK libspdk_event.so 00:01:43.605 CC lib/bdev/bdev.o 00:01:43.605 CC lib/bdev/bdev_rpc.o 00:01:43.605 CC lib/bdev/bdev_zone.o 00:01:43.605 CC lib/bdev/part.o 00:01:43.605 CC lib/bdev/scsi_nvme.o 00:01:43.605 SYMLINK libspdk_nvme.so 00:01:44.543 LIB libspdk_blob.a 00:01:44.543 SO libspdk_blob.so.11.0 00:01:44.543 SYMLINK libspdk_blob.so 00:01:44.802 CC lib/blobfs/blobfs.o 00:01:44.802 CC lib/blobfs/tree.o 00:01:44.802 CC lib/lvol/lvol.o 00:01:45.061 LIB libspdk_bdev.a 00:01:45.321 SO libspdk_bdev.so.15.0 00:01:45.321 LIB libspdk_blobfs.a 00:01:45.321 LIB libspdk_lvol.a 00:01:45.321 SYMLINK libspdk_bdev.so 00:01:45.321 SO libspdk_blobfs.so.10.0 00:01:45.321 SO libspdk_lvol.so.10.0 00:01:45.321 SYMLINK libspdk_blobfs.so 00:01:45.321 SYMLINK libspdk_lvol.so 00:01:45.580 CC lib/nvmf/ctrlr.o 00:01:45.580 CC lib/nvmf/ctrlr_bdev.o 00:01:45.580 CC lib/nvmf/ctrlr_discovery.o 00:01:45.580 CC lib/nvmf/subsystem.o 00:01:45.580 CC lib/nvmf/nvmf.o 00:01:45.580 CC lib/nbd/nbd.o 00:01:45.580 CC lib/nvmf/nvmf_rpc.o 00:01:45.580 CC lib/nbd/nbd_rpc.o 00:01:45.580 CC lib/nvmf/transport.o 00:01:45.580 CC lib/nvmf/tcp.o 00:01:45.580 CC lib/ftl/ftl_core.o 00:01:45.580 CC lib/ftl/ftl_init.o 00:01:45.580 CC lib/nvmf/rdma.o 00:01:45.580 CC lib/ftl/ftl_layout.o 00:01:45.580 CC lib/scsi/dev.o 00:01:45.580 CC lib/ftl/ftl_debug.o 00:01:45.580 CC lib/ublk/ublk.o 00:01:45.580 CC lib/scsi/lun.o 00:01:45.580 CC lib/ublk/ublk_rpc.o 00:01:45.580 CC lib/ftl/ftl_io.o 00:01:45.580 CC lib/scsi/scsi.o 00:01:45.580 CC lib/scsi/port.o 00:01:45.580 CC lib/ftl/ftl_sb.o 00:01:45.580 CC lib/scsi/scsi_bdev.o 00:01:45.580 CC lib/ftl/ftl_l2p.o 00:01:45.580 CC lib/scsi/scsi_pr.o 00:01:45.580 CC lib/ftl/ftl_l2p_flat.o 00:01:45.580 CC lib/ftl/ftl_nv_cache.o 00:01:45.580 CC lib/scsi/scsi_rpc.o 00:01:45.580 CC lib/ftl/ftl_band.o 00:01:45.580 CC lib/ftl/ftl_band_ops.o 00:01:45.580 CC 
lib/scsi/task.o 00:01:45.580 CC lib/ftl/ftl_writer.o 00:01:45.580 CC lib/ftl/ftl_rq.o 00:01:45.580 CC lib/ftl/ftl_reloc.o 00:01:45.580 CC lib/ftl/ftl_l2p_cache.o 00:01:45.580 CC lib/ftl/ftl_p2l.o 00:01:45.580 CC lib/ftl/mngt/ftl_mngt.o 00:01:45.580 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:45.580 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:45.580 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:45.580 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:45.580 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:45.580 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:45.580 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:45.580 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:45.580 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:45.580 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:45.580 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:45.580 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:45.580 CC lib/ftl/utils/ftl_conf.o 00:01:45.580 CC lib/ftl/utils/ftl_mempool.o 00:01:45.580 CC lib/ftl/utils/ftl_md.o 00:01:45.580 CC lib/ftl/utils/ftl_property.o 00:01:45.580 CC lib/ftl/utils/ftl_bitmap.o 00:01:45.580 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:45.580 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:45.580 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:45.580 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:45.580 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:45.580 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:45.580 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:45.580 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:45.580 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:45.580 CC lib/ftl/base/ftl_base_dev.o 00:01:45.580 CC lib/ftl/base/ftl_base_bdev.o 00:01:45.580 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:45.580 CC lib/ftl/ftl_trace.o 00:01:46.153 LIB libspdk_nbd.a 00:01:46.153 SO libspdk_nbd.so.7.0 00:01:46.153 LIB libspdk_scsi.a 00:01:46.153 SYMLINK libspdk_nbd.so 00:01:46.153 SO libspdk_scsi.so.9.0 00:01:46.153 LIB libspdk_ublk.a 00:01:46.153 SO libspdk_ublk.so.3.0 00:01:46.153 SYMLINK libspdk_scsi.so 00:01:46.411 SYMLINK libspdk_ublk.so 00:01:46.411 LIB libspdk_ftl.a 00:01:46.411 SO libspdk_ftl.so.9.0 
00:01:46.411 CC lib/iscsi/conn.o 00:01:46.411 CC lib/iscsi/init_grp.o 00:01:46.411 CC lib/iscsi/iscsi.o 00:01:46.411 CC lib/iscsi/md5.o 00:01:46.411 CC lib/iscsi/param.o 00:01:46.411 CC lib/iscsi/portal_grp.o 00:01:46.411 CC lib/iscsi/tgt_node.o 00:01:46.411 CC lib/iscsi/iscsi_subsystem.o 00:01:46.411 CC lib/iscsi/iscsi_rpc.o 00:01:46.411 CC lib/iscsi/task.o 00:01:46.411 CC lib/vhost/vhost.o 00:01:46.411 CC lib/vhost/vhost_rpc.o 00:01:46.411 CC lib/vhost/vhost_scsi.o 00:01:46.411 CC lib/vhost/vhost_blk.o 00:01:46.411 CC lib/vhost/rte_vhost_user.o 00:01:46.670 SYMLINK libspdk_ftl.so 00:01:46.929 LIB libspdk_nvmf.a 00:01:47.188 SO libspdk_nvmf.so.18.0 00:01:47.188 SYMLINK libspdk_nvmf.so 00:01:47.188 LIB libspdk_vhost.a 00:01:47.188 SO libspdk_vhost.so.8.0 00:01:47.447 SYMLINK libspdk_vhost.so 00:01:47.447 LIB libspdk_iscsi.a 00:01:47.447 SO libspdk_iscsi.so.8.0 00:01:47.708 SYMLINK libspdk_iscsi.so 00:01:47.967 CC module/env_dpdk/env_dpdk_rpc.o 00:01:48.227 LIB libspdk_env_dpdk_rpc.a 00:01:48.227 CC module/sock/posix/posix.o 00:01:48.227 CC module/blob/bdev/blob_bdev.o 00:01:48.227 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:48.227 CC module/scheduler/gscheduler/gscheduler.o 00:01:48.227 CC module/accel/dsa/accel_dsa.o 00:01:48.227 CC module/keyring/file/keyring.o 00:01:48.227 CC module/accel/dsa/accel_dsa_rpc.o 00:01:48.227 CC module/keyring/file/keyring_rpc.o 00:01:48.227 CC module/accel/iaa/accel_iaa.o 00:01:48.227 CC module/accel/error/accel_error.o 00:01:48.227 CC module/accel/iaa/accel_iaa_rpc.o 00:01:48.227 CC module/accel/error/accel_error_rpc.o 00:01:48.227 CC module/accel/ioat/accel_ioat.o 00:01:48.227 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:48.227 CC module/accel/ioat/accel_ioat_rpc.o 00:01:48.227 SO libspdk_env_dpdk_rpc.so.6.0 00:01:48.227 SYMLINK libspdk_env_dpdk_rpc.so 00:01:48.227 LIB libspdk_scheduler_gscheduler.a 00:01:48.227 LIB libspdk_keyring_file.a 00:01:48.227 LIB libspdk_scheduler_dpdk_governor.a 00:01:48.227 SO 
libspdk_scheduler_gscheduler.so.4.0 00:01:48.487 LIB libspdk_scheduler_dynamic.a 00:01:48.487 SO libspdk_keyring_file.so.1.0 00:01:48.487 LIB libspdk_accel_error.a 00:01:48.487 LIB libspdk_accel_iaa.a 00:01:48.487 LIB libspdk_accel_dsa.a 00:01:48.487 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:48.487 LIB libspdk_accel_ioat.a 00:01:48.487 SO libspdk_scheduler_dynamic.so.4.0 00:01:48.487 SO libspdk_accel_iaa.so.3.0 00:01:48.487 SO libspdk_accel_error.so.2.0 00:01:48.487 LIB libspdk_blob_bdev.a 00:01:48.487 SYMLINK libspdk_scheduler_gscheduler.so 00:01:48.487 SO libspdk_accel_dsa.so.5.0 00:01:48.487 SO libspdk_accel_ioat.so.6.0 00:01:48.487 SYMLINK libspdk_keyring_file.so 00:01:48.487 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:48.487 SO libspdk_blob_bdev.so.11.0 00:01:48.487 SYMLINK libspdk_scheduler_dynamic.so 00:01:48.487 SYMLINK libspdk_accel_iaa.so 00:01:48.487 SYMLINK libspdk_accel_error.so 00:01:48.487 SYMLINK libspdk_accel_dsa.so 00:01:48.487 SYMLINK libspdk_accel_ioat.so 00:01:48.487 SYMLINK libspdk_blob_bdev.so 00:01:48.747 LIB libspdk_sock_posix.a 00:01:48.747 SO libspdk_sock_posix.so.6.0 00:01:48.747 SYMLINK libspdk_sock_posix.so 00:01:49.005 CC module/blobfs/bdev/blobfs_bdev.o 00:01:49.005 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:49.005 CC module/bdev/null/bdev_null_rpc.o 00:01:49.005 CC module/bdev/null/bdev_null.o 00:01:49.005 CC module/bdev/gpt/gpt.o 00:01:49.005 CC module/bdev/gpt/vbdev_gpt.o 00:01:49.005 CC module/bdev/malloc/bdev_malloc.o 00:01:49.005 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:49.005 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:49.005 CC module/bdev/delay/vbdev_delay.o 00:01:49.005 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:49.005 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:49.005 CC module/bdev/ftl/bdev_ftl.o 00:01:49.005 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:49.005 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:49.005 CC module/bdev/lvol/vbdev_lvol.o 00:01:49.005 CC module/bdev/lvol/vbdev_lvol_rpc.o 
00:01:49.005 CC module/bdev/iscsi/bdev_iscsi.o 00:01:49.005 CC module/bdev/split/vbdev_split.o 00:01:49.005 CC module/bdev/nvme/bdev_nvme.o 00:01:49.005 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:49.005 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:49.005 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:49.005 CC module/bdev/nvme/nvme_rpc.o 00:01:49.005 CC module/bdev/split/vbdev_split_rpc.o 00:01:49.005 CC module/bdev/nvme/bdev_mdns_client.o 00:01:49.005 CC module/bdev/nvme/vbdev_opal.o 00:01:49.005 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:49.005 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:49.005 CC module/bdev/error/vbdev_error.o 00:01:49.005 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:49.005 CC module/bdev/passthru/vbdev_passthru.o 00:01:49.005 CC module/bdev/error/vbdev_error_rpc.o 00:01:49.005 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:49.005 CC module/bdev/raid/bdev_raid.o 00:01:49.005 CC module/bdev/raid/bdev_raid_rpc.o 00:01:49.005 CC module/bdev/raid/raid1.o 00:01:49.005 CC module/bdev/raid/bdev_raid_sb.o 00:01:49.005 CC module/bdev/raid/raid0.o 00:01:49.005 CC module/bdev/aio/bdev_aio.o 00:01:49.005 CC module/bdev/raid/concat.o 00:01:49.005 CC module/bdev/aio/bdev_aio_rpc.o 00:01:49.263 LIB libspdk_blobfs_bdev.a 00:01:49.263 SO libspdk_blobfs_bdev.so.6.0 00:01:49.263 LIB libspdk_bdev_split.a 00:01:49.263 LIB libspdk_bdev_gpt.a 00:01:49.263 LIB libspdk_bdev_null.a 00:01:49.263 SO libspdk_bdev_gpt.so.6.0 00:01:49.263 LIB libspdk_bdev_ftl.a 00:01:49.263 SO libspdk_bdev_split.so.6.0 00:01:49.263 LIB libspdk_bdev_passthru.a 00:01:49.263 SO libspdk_bdev_null.so.6.0 00:01:49.263 SYMLINK libspdk_blobfs_bdev.so 00:01:49.263 LIB libspdk_bdev_error.a 00:01:49.263 SO libspdk_bdev_ftl.so.6.0 00:01:49.263 SO libspdk_bdev_passthru.so.6.0 00:01:49.263 SYMLINK libspdk_bdev_split.so 00:01:49.263 SYMLINK libspdk_bdev_gpt.so 00:01:49.263 LIB libspdk_bdev_zone_block.a 00:01:49.263 LIB libspdk_bdev_aio.a 00:01:49.263 LIB libspdk_bdev_malloc.a 
00:01:49.263 LIB libspdk_bdev_iscsi.a 00:01:49.263 SO libspdk_bdev_error.so.6.0 00:01:49.263 SYMLINK libspdk_bdev_null.so 00:01:49.263 SO libspdk_bdev_malloc.so.6.0 00:01:49.263 SO libspdk_bdev_aio.so.6.0 00:01:49.263 SO libspdk_bdev_iscsi.so.6.0 00:01:49.263 SO libspdk_bdev_zone_block.so.6.0 00:01:49.263 LIB libspdk_bdev_delay.a 00:01:49.263 SYMLINK libspdk_bdev_ftl.so 00:01:49.263 SYMLINK libspdk_bdev_passthru.so 00:01:49.263 SYMLINK libspdk_bdev_error.so 00:01:49.263 SO libspdk_bdev_delay.so.6.0 00:01:49.263 SYMLINK libspdk_bdev_iscsi.so 00:01:49.263 SYMLINK libspdk_bdev_aio.so 00:01:49.263 SYMLINK libspdk_bdev_malloc.so 00:01:49.263 SYMLINK libspdk_bdev_zone_block.so 00:01:49.263 LIB libspdk_bdev_lvol.a 00:01:49.263 LIB libspdk_bdev_virtio.a 00:01:49.522 SYMLINK libspdk_bdev_delay.so 00:01:49.522 SO libspdk_bdev_lvol.so.6.0 00:01:49.522 SO libspdk_bdev_virtio.so.6.0 00:01:49.522 SYMLINK libspdk_bdev_lvol.so 00:01:49.522 SYMLINK libspdk_bdev_virtio.so 00:01:49.522 LIB libspdk_bdev_raid.a 00:01:49.781 SO libspdk_bdev_raid.so.6.0 00:01:49.781 SYMLINK libspdk_bdev_raid.so 00:01:50.351 LIB libspdk_bdev_nvme.a 00:01:50.351 SO libspdk_bdev_nvme.so.7.0 00:01:50.611 SYMLINK libspdk_bdev_nvme.so 00:01:51.181 CC module/event/subsystems/iobuf/iobuf.o 00:01:51.181 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:51.181 CC module/event/subsystems/keyring/keyring.o 00:01:51.181 CC module/event/subsystems/sock/sock.o 00:01:51.181 CC module/event/subsystems/scheduler/scheduler.o 00:01:51.181 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:51.181 CC module/event/subsystems/vmd/vmd.o 00:01:51.181 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:51.181 LIB libspdk_event_keyring.a 00:01:51.181 LIB libspdk_event_sock.a 00:01:51.181 LIB libspdk_event_vhost_blk.a 00:01:51.181 LIB libspdk_event_iobuf.a 00:01:51.181 LIB libspdk_event_scheduler.a 00:01:51.181 LIB libspdk_event_vmd.a 00:01:51.181 SO libspdk_event_keyring.so.1.0 00:01:51.181 SO libspdk_event_sock.so.5.0 00:01:51.181 
SO libspdk_event_vhost_blk.so.3.0 00:01:51.181 SO libspdk_event_scheduler.so.4.0 00:01:51.181 SO libspdk_event_iobuf.so.3.0 00:01:51.181 SO libspdk_event_vmd.so.6.0 00:01:51.442 SYMLINK libspdk_event_keyring.so 00:01:51.442 SYMLINK libspdk_event_sock.so 00:01:51.442 SYMLINK libspdk_event_vhost_blk.so 00:01:51.442 SYMLINK libspdk_event_scheduler.so 00:01:51.442 SYMLINK libspdk_event_iobuf.so 00:01:51.442 SYMLINK libspdk_event_vmd.so 00:01:51.701 CC module/event/subsystems/accel/accel.o 00:01:51.701 LIB libspdk_event_accel.a 00:01:51.701 SO libspdk_event_accel.so.6.0 00:01:51.962 SYMLINK libspdk_event_accel.so 00:01:52.223 CC module/event/subsystems/bdev/bdev.o 00:01:52.223 LIB libspdk_event_bdev.a 00:01:52.223 SO libspdk_event_bdev.so.6.0 00:01:52.483 SYMLINK libspdk_event_bdev.so 00:01:52.743 CC module/event/subsystems/scsi/scsi.o 00:01:52.743 CC module/event/subsystems/ublk/ublk.o 00:01:52.743 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:52.743 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:52.743 CC module/event/subsystems/nbd/nbd.o 00:01:52.743 LIB libspdk_event_nbd.a 00:01:52.743 LIB libspdk_event_ublk.a 00:01:52.743 LIB libspdk_event_scsi.a 00:01:52.743 SO libspdk_event_nbd.so.6.0 00:01:53.004 SO libspdk_event_ublk.so.3.0 00:01:53.004 SO libspdk_event_scsi.so.6.0 00:01:53.004 LIB libspdk_event_nvmf.a 00:01:53.004 SYMLINK libspdk_event_nbd.so 00:01:53.004 SYMLINK libspdk_event_ublk.so 00:01:53.004 SO libspdk_event_nvmf.so.6.0 00:01:53.004 SYMLINK libspdk_event_scsi.so 00:01:53.004 SYMLINK libspdk_event_nvmf.so 00:01:53.263 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:53.263 CC module/event/subsystems/iscsi/iscsi.o 00:01:53.263 LIB libspdk_event_iscsi.a 00:01:53.263 LIB libspdk_event_vhost_scsi.a 00:01:53.524 SO libspdk_event_iscsi.so.6.0 00:01:53.524 SO libspdk_event_vhost_scsi.so.3.0 00:01:53.524 SYMLINK libspdk_event_iscsi.so 00:01:53.524 SYMLINK libspdk_event_vhost_scsi.so 00:01:53.524 SO libspdk.so.6.0 00:01:53.524 SYMLINK libspdk.so 
00:01:54.097 CC app/spdk_nvme_identify/identify.o 00:01:54.097 CC app/spdk_lspci/spdk_lspci.o 00:01:54.098 CC app/trace_record/trace_record.o 00:01:54.098 CC app/spdk_nvme_perf/perf.o 00:01:54.098 CXX app/trace/trace.o 00:01:54.098 CC test/rpc_client/rpc_client_test.o 00:01:54.098 TEST_HEADER include/spdk/accel.h 00:01:54.098 TEST_HEADER include/spdk/assert.h 00:01:54.098 TEST_HEADER include/spdk/accel_module.h 00:01:54.098 TEST_HEADER include/spdk/barrier.h 00:01:54.098 TEST_HEADER include/spdk/base64.h 00:01:54.098 CC app/spdk_nvme_discover/discovery_aer.o 00:01:54.098 TEST_HEADER include/spdk/bdev.h 00:01:54.098 TEST_HEADER include/spdk/bdev_module.h 00:01:54.098 TEST_HEADER include/spdk/bdev_zone.h 00:01:54.098 TEST_HEADER include/spdk/bit_array.h 00:01:54.098 CC app/spdk_top/spdk_top.o 00:01:54.098 CC app/spdk_dd/spdk_dd.o 00:01:54.098 TEST_HEADER include/spdk/blob_bdev.h 00:01:54.098 TEST_HEADER include/spdk/bit_pool.h 00:01:54.098 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:54.098 CC app/vhost/vhost.o 00:01:54.098 TEST_HEADER include/spdk/blobfs.h 00:01:54.098 TEST_HEADER include/spdk/blob.h 00:01:54.098 TEST_HEADER include/spdk/conf.h 00:01:54.098 TEST_HEADER include/spdk/config.h 00:01:54.098 TEST_HEADER include/spdk/cpuset.h 00:01:54.098 TEST_HEADER include/spdk/crc16.h 00:01:54.098 TEST_HEADER include/spdk/crc32.h 00:01:54.098 TEST_HEADER include/spdk/crc64.h 00:01:54.098 TEST_HEADER include/spdk/dif.h 00:01:54.098 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:54.098 TEST_HEADER include/spdk/endian.h 00:01:54.098 TEST_HEADER include/spdk/dma.h 00:01:54.098 TEST_HEADER include/spdk/env.h 00:01:54.098 TEST_HEADER include/spdk/event.h 00:01:54.098 TEST_HEADER include/spdk/fd_group.h 00:01:54.098 TEST_HEADER include/spdk/env_dpdk.h 00:01:54.098 TEST_HEADER include/spdk/fd.h 00:01:54.098 TEST_HEADER include/spdk/file.h 00:01:54.098 TEST_HEADER include/spdk/ftl.h 00:01:54.098 TEST_HEADER include/spdk/gpt_spec.h 00:01:54.098 TEST_HEADER 
include/spdk/hexlify.h 00:01:54.098 CC app/iscsi_tgt/iscsi_tgt.o 00:01:54.098 TEST_HEADER include/spdk/idxd_spec.h 00:01:54.098 TEST_HEADER include/spdk/idxd.h 00:01:54.098 TEST_HEADER include/spdk/histogram_data.h 00:01:54.098 TEST_HEADER include/spdk/init.h 00:01:54.098 TEST_HEADER include/spdk/ioat.h 00:01:54.098 TEST_HEADER include/spdk/ioat_spec.h 00:01:54.098 TEST_HEADER include/spdk/json.h 00:01:54.098 TEST_HEADER include/spdk/jsonrpc.h 00:01:54.098 TEST_HEADER include/spdk/iscsi_spec.h 00:01:54.098 TEST_HEADER include/spdk/likely.h 00:01:54.098 TEST_HEADER include/spdk/keyring_module.h 00:01:54.098 TEST_HEADER include/spdk/keyring.h 00:01:54.098 CC app/nvmf_tgt/nvmf_main.o 00:01:54.098 TEST_HEADER include/spdk/log.h 00:01:54.098 TEST_HEADER include/spdk/lvol.h 00:01:54.098 TEST_HEADER include/spdk/memory.h 00:01:54.098 TEST_HEADER include/spdk/mmio.h 00:01:54.098 TEST_HEADER include/spdk/nbd.h 00:01:54.098 TEST_HEADER include/spdk/nvme.h 00:01:54.098 TEST_HEADER include/spdk/notify.h 00:01:54.098 TEST_HEADER include/spdk/nvme_intel.h 00:01:54.098 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:54.098 TEST_HEADER include/spdk/nvme_spec.h 00:01:54.098 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:54.098 TEST_HEADER include/spdk/nvme_zns.h 00:01:54.098 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:54.098 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:54.098 TEST_HEADER include/spdk/nvmf_spec.h 00:01:54.098 TEST_HEADER include/spdk/nvmf_transport.h 00:01:54.098 TEST_HEADER include/spdk/nvmf.h 00:01:54.098 TEST_HEADER include/spdk/opal.h 00:01:54.098 CC app/spdk_tgt/spdk_tgt.o 00:01:54.098 TEST_HEADER include/spdk/queue.h 00:01:54.098 TEST_HEADER include/spdk/reduce.h 00:01:54.098 TEST_HEADER include/spdk/pci_ids.h 00:01:54.098 TEST_HEADER include/spdk/opal_spec.h 00:01:54.098 TEST_HEADER include/spdk/pipe.h 00:01:54.098 TEST_HEADER include/spdk/rpc.h 00:01:54.098 TEST_HEADER include/spdk/scheduler.h 00:01:54.098 TEST_HEADER include/spdk/scsi.h 00:01:54.098 
TEST_HEADER include/spdk/stdinc.h 00:01:54.098 TEST_HEADER include/spdk/sock.h 00:01:54.098 TEST_HEADER include/spdk/scsi_spec.h 00:01:54.098 TEST_HEADER include/spdk/thread.h 00:01:54.098 TEST_HEADER include/spdk/string.h 00:01:54.098 TEST_HEADER include/spdk/trace.h 00:01:54.098 TEST_HEADER include/spdk/trace_parser.h 00:01:54.098 TEST_HEADER include/spdk/tree.h 00:01:54.098 TEST_HEADER include/spdk/ublk.h 00:01:54.098 TEST_HEADER include/spdk/util.h 00:01:54.098 TEST_HEADER include/spdk/uuid.h 00:01:54.098 TEST_HEADER include/spdk/version.h 00:01:54.098 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:54.098 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:54.098 TEST_HEADER include/spdk/vhost.h 00:01:54.098 TEST_HEADER include/spdk/vmd.h 00:01:54.098 TEST_HEADER include/spdk/xor.h 00:01:54.098 TEST_HEADER include/spdk/zipf.h 00:01:54.098 CXX test/cpp_headers/accel_module.o 00:01:54.098 CXX test/cpp_headers/accel.o 00:01:54.098 CXX test/cpp_headers/assert.o 00:01:54.098 CXX test/cpp_headers/base64.o 00:01:54.098 CXX test/cpp_headers/barrier.o 00:01:54.098 CXX test/cpp_headers/bdev.o 00:01:54.098 CXX test/cpp_headers/bdev_zone.o 00:01:54.098 CXX test/cpp_headers/bit_pool.o 00:01:54.098 CXX test/cpp_headers/bdev_module.o 00:01:54.098 CXX test/cpp_headers/bit_array.o 00:01:54.098 CXX test/cpp_headers/blobfs_bdev.o 00:01:54.098 CXX test/cpp_headers/blob_bdev.o 00:01:54.098 CXX test/cpp_headers/blobfs.o 00:01:54.098 CXX test/cpp_headers/conf.o 00:01:54.098 CXX test/cpp_headers/blob.o 00:01:54.098 CXX test/cpp_headers/config.o 00:01:54.098 CXX test/cpp_headers/cpuset.o 00:01:54.098 CXX test/cpp_headers/crc64.o 00:01:54.098 CXX test/cpp_headers/crc32.o 00:01:54.098 CXX test/cpp_headers/crc16.o 00:01:54.098 CXX test/cpp_headers/dif.o 00:01:54.098 CXX test/cpp_headers/dma.o 00:01:54.098 CXX test/cpp_headers/endian.o 00:01:54.098 CXX test/cpp_headers/env.o 00:01:54.098 CXX test/cpp_headers/env_dpdk.o 00:01:54.098 CXX test/cpp_headers/event.o 00:01:54.098 CXX 
test/cpp_headers/fd_group.o 00:01:54.098 CXX test/cpp_headers/file.o 00:01:54.098 CXX test/cpp_headers/ftl.o 00:01:54.098 CXX test/cpp_headers/gpt_spec.o 00:01:54.098 CXX test/cpp_headers/fd.o 00:01:54.098 CXX test/cpp_headers/hexlify.o 00:01:54.098 CXX test/cpp_headers/histogram_data.o 00:01:54.098 CXX test/cpp_headers/idxd_spec.o 00:01:54.098 CXX test/cpp_headers/idxd.o 00:01:54.098 CXX test/cpp_headers/init.o 00:01:54.098 CXX test/cpp_headers/ioat.o 00:01:54.098 CC examples/idxd/perf/perf.o 00:01:54.098 CC examples/nvme/arbitration/arbitration.o 00:01:54.098 CC examples/nvme/reconnect/reconnect.o 00:01:54.098 CC examples/accel/perf/accel_perf.o 00:01:54.098 CC app/fio/nvme/fio_plugin.o 00:01:54.098 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:54.098 CXX test/cpp_headers/ioat_spec.o 00:01:54.098 CC examples/nvme/abort/abort.o 00:01:54.098 CC examples/vmd/led/led.o 00:01:54.098 CC test/event/event_perf/event_perf.o 00:01:54.098 CC test/env/memory/memory_ut.o 00:01:54.098 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:54.098 CC examples/vmd/lsvmd/lsvmd.o 00:01:54.098 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:54.098 CC examples/nvme/hello_world/hello_world.o 00:01:54.098 CC examples/nvme/hotplug/hotplug.o 00:01:54.098 CC test/nvme/reset/reset.o 00:01:54.098 CC app/fio/bdev/fio_plugin.o 00:01:54.098 CC test/env/vtophys/vtophys.o 00:01:54.098 CC examples/sock/hello_world/hello_sock.o 00:01:54.098 CC test/env/pci/pci_ut.o 00:01:54.098 CC test/event/reactor_perf/reactor_perf.o 00:01:54.098 CC test/app/histogram_perf/histogram_perf.o 00:01:54.098 CC examples/nvmf/nvmf/nvmf.o 00:01:54.098 CC examples/ioat/verify/verify.o 00:01:54.098 CC test/nvme/e2edp/nvme_dp.o 00:01:54.098 CC test/nvme/cuse/cuse.o 00:01:54.098 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:54.379 CC test/nvme/sgl/sgl.o 00:01:54.379 CC test/nvme/reserve/reserve.o 00:01:54.379 CC test/dma/test_dma/test_dma.o 00:01:54.379 CC examples/util/zipf/zipf.o 00:01:54.379 CC 
test/nvme/boot_partition/boot_partition.o 00:01:54.379 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:54.379 CC test/nvme/aer/aer.o 00:01:54.379 CC test/nvme/fdp/fdp.o 00:01:54.379 CC test/app/jsoncat/jsoncat.o 00:01:54.379 CC examples/bdev/bdevperf/bdevperf.o 00:01:54.379 CC test/nvme/startup/startup.o 00:01:54.379 CC test/event/app_repeat/app_repeat.o 00:01:54.379 CC examples/ioat/perf/perf.o 00:01:54.379 CC test/nvme/err_injection/err_injection.o 00:01:54.379 CC test/thread/poller_perf/poller_perf.o 00:01:54.379 CC test/event/reactor/reactor.o 00:01:54.379 CC test/app/stub/stub.o 00:01:54.379 CC examples/bdev/hello_world/hello_bdev.o 00:01:54.379 CC test/nvme/fused_ordering/fused_ordering.o 00:01:54.379 CC test/blobfs/mkfs/mkfs.o 00:01:54.379 CC test/nvme/simple_copy/simple_copy.o 00:01:54.379 CC test/nvme/overhead/overhead.o 00:01:54.379 CC examples/thread/thread/thread_ex.o 00:01:54.379 CC test/bdev/bdevio/bdevio.o 00:01:54.379 CC test/app/bdev_svc/bdev_svc.o 00:01:54.379 CC test/event/scheduler/scheduler.o 00:01:54.379 CC test/nvme/connect_stress/connect_stress.o 00:01:54.379 CC examples/blob/cli/blobcli.o 00:01:54.379 CC examples/blob/hello_world/hello_blob.o 00:01:54.379 LINK spdk_lspci 00:01:54.379 CC test/accel/dif/dif.o 00:01:54.379 CC test/nvme/compliance/nvme_compliance.o 00:01:54.379 LINK interrupt_tgt 00:01:54.379 LINK iscsi_tgt 00:01:54.648 LINK rpc_client_test 00:01:54.648 LINK nvmf_tgt 00:01:54.648 CC test/env/mem_callbacks/mem_callbacks.o 00:01:54.648 LINK spdk_trace_record 00:01:54.648 LINK vhost 00:01:54.648 LINK event_perf 00:01:54.648 CC test/lvol/esnap/esnap.o 00:01:54.648 LINK led 00:01:54.648 LINK env_dpdk_post_init 00:01:54.648 LINK lsvmd 00:01:54.648 LINK vtophys 00:01:54.648 LINK reactor_perf 00:01:54.648 LINK spdk_nvme_discover 00:01:54.648 LINK spdk_tgt 00:01:54.648 CXX test/cpp_headers/iscsi_spec.o 00:01:54.648 LINK zipf 00:01:54.648 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:54.648 LINK reactor 00:01:54.648 LINK app_repeat 
00:01:54.648 CXX test/cpp_headers/json.o 00:01:54.648 CXX test/cpp_headers/jsonrpc.o 00:01:54.648 CXX test/cpp_headers/keyring.o 00:01:54.648 CXX test/cpp_headers/keyring_module.o 00:01:54.648 CXX test/cpp_headers/likely.o 00:01:54.648 CXX test/cpp_headers/lvol.o 00:01:54.648 CXX test/cpp_headers/memory.o 00:01:54.649 CXX test/cpp_headers/log.o 00:01:54.649 CXX test/cpp_headers/mmio.o 00:01:54.649 CXX test/cpp_headers/nbd.o 00:01:54.649 CXX test/cpp_headers/notify.o 00:01:54.649 CXX test/cpp_headers/nvme.o 00:01:54.649 LINK stub 00:01:54.649 CXX test/cpp_headers/nvme_intel.o 00:01:54.649 CXX test/cpp_headers/nvme_ocssd.o 00:01:54.649 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:54.649 LINK hello_world 00:01:54.649 LINK startup 00:01:54.649 CXX test/cpp_headers/nvme_spec.o 00:01:54.649 CXX test/cpp_headers/nvme_zns.o 00:01:54.649 LINK mkfs 00:01:54.649 LINK connect_stress 00:01:54.649 CXX test/cpp_headers/nvmf_cmd.o 00:01:54.649 LINK reset 00:01:54.911 LINK scheduler 00:01:54.911 LINK nvme_dp 00:01:54.911 LINK boot_partition 00:01:54.911 LINK simple_copy 00:01:54.911 LINK jsoncat 00:01:54.911 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:54.911 LINK thread 00:01:54.911 LINK pmr_persistence 00:01:54.911 LINK histogram_perf 00:01:54.911 LINK poller_perf 00:01:54.911 LINK doorbell_aers 00:01:54.911 LINK idxd_perf 00:01:54.911 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:54.911 LINK cmb_copy 00:01:54.911 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:54.911 LINK err_injection 00:01:54.911 CXX test/cpp_headers/nvmf.o 00:01:54.911 CXX test/cpp_headers/nvmf_spec.o 00:01:54.911 LINK overhead 00:01:54.911 LINK nvmf 00:01:54.911 LINK bdev_svc 00:01:54.911 CXX test/cpp_headers/nvmf_transport.o 00:01:54.911 CXX test/cpp_headers/opal.o 00:01:54.911 LINK verify 00:01:54.911 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:54.911 CXX test/cpp_headers/opal_spec.o 00:01:54.911 LINK ioat_perf 00:01:54.911 LINK reserve 00:01:54.911 CXX test/cpp_headers/pci_ids.o 00:01:54.911 CXX 
test/cpp_headers/pipe.o 00:01:54.911 LINK fused_ordering 00:01:54.911 CXX test/cpp_headers/queue.o 00:01:54.911 CXX test/cpp_headers/reduce.o 00:01:54.911 CXX test/cpp_headers/rpc.o 00:01:54.911 LINK hotplug 00:01:54.911 CXX test/cpp_headers/scheduler.o 00:01:54.911 CXX test/cpp_headers/scsi.o 00:01:54.911 CXX test/cpp_headers/scsi_spec.o 00:01:54.911 CXX test/cpp_headers/sock.o 00:01:54.911 CXX test/cpp_headers/stdinc.o 00:01:54.911 LINK spdk_dd 00:01:54.911 CXX test/cpp_headers/thread.o 00:01:54.911 CXX test/cpp_headers/string.o 00:01:54.911 CXX test/cpp_headers/trace.o 00:01:54.911 LINK hello_bdev 00:01:54.911 LINK nvme_compliance 00:01:54.911 CXX test/cpp_headers/trace_parser.o 00:01:54.911 CXX test/cpp_headers/tree.o 00:01:54.911 LINK sgl 00:01:54.911 CXX test/cpp_headers/util.o 00:01:54.911 CXX test/cpp_headers/ublk.o 00:01:54.911 LINK hello_sock 00:01:54.911 CXX test/cpp_headers/uuid.o 00:01:54.912 CXX test/cpp_headers/version.o 00:01:54.912 CXX test/cpp_headers/vfio_user_pci.o 00:01:54.912 LINK bdevio 00:01:54.912 CXX test/cpp_headers/vfio_user_spec.o 00:01:54.912 CXX test/cpp_headers/vhost.o 00:01:54.912 CXX test/cpp_headers/xor.o 00:01:54.912 CXX test/cpp_headers/vmd.o 00:01:54.912 LINK hello_blob 00:01:54.912 LINK aer 00:01:54.912 CXX test/cpp_headers/zipf.o 00:01:54.912 LINK nvme_manage 00:01:55.172 LINK fdp 00:01:55.172 LINK arbitration 00:01:55.172 LINK reconnect 00:01:55.172 LINK abort 00:01:55.172 LINK pci_ut 00:01:55.172 LINK test_dma 00:01:55.172 LINK spdk_trace 00:01:55.172 LINK accel_perf 00:01:55.172 LINK dif 00:01:55.172 LINK blobcli 00:01:55.172 LINK spdk_nvme_perf 00:01:55.172 LINK nvme_fuzz 00:01:55.172 LINK spdk_top 00:01:55.431 LINK spdk_bdev 00:01:55.432 LINK mem_callbacks 00:01:55.432 LINK spdk_nvme 00:01:55.432 LINK spdk_nvme_identify 00:01:55.432 LINK vhost_fuzz 00:01:55.432 LINK memory_ut 00:01:55.432 LINK bdevperf 00:01:55.691 LINK cuse 00:01:56.261 LINK iscsi_fuzz 00:01:57.642 LINK esnap 00:01:57.903 00:01:57.903 real 0m41.597s 
00:01:57.903 user 5m49.682s 00:01:57.903 sys 3m31.834s 00:01:57.903 03:53:12 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:57.903 03:53:12 -- common/autotest_common.sh@10 -- $ set +x 00:01:57.903 ************************************ 00:01:57.903 END TEST make 00:01:57.903 ************************************ 00:01:57.903 03:53:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:57.903 03:53:12 -- pm/common@30 -- $ signal_monitor_resources TERM 00:01:57.903 03:53:12 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:01:57.903 03:53:12 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.903 03:53:12 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:57.903 03:53:12 -- pm/common@45 -- $ pid=5729 00:01:57.903 03:53:12 -- pm/common@52 -- $ sudo kill -TERM 5729 00:01:57.903 03:53:12 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.903 03:53:12 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:57.903 03:53:12 -- pm/common@45 -- $ pid=5732 00:01:57.903 03:53:12 -- pm/common@52 -- $ sudo kill -TERM 5732 00:01:58.163 03:53:12 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.163 03:53:12 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:58.163 03:53:12 -- pm/common@45 -- $ pid=5735 00:01:58.163 03:53:12 -- pm/common@52 -- $ sudo kill -TERM 5735 00:01:58.163 03:53:12 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.163 03:53:12 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:58.163 03:53:12 -- pm/common@45 -- $ pid=5736 00:01:58.163 03:53:12 -- pm/common@52 -- $ sudo kill -TERM 5736 00:01:58.163 03:53:12 -- spdk/autotest.sh@25 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:01:58.163 03:53:12 -- nvmf/common.sh@7 -- # uname -s 00:01:58.163 03:53:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:58.163 03:53:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:58.163 03:53:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:58.163 03:53:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:58.163 03:53:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:58.163 03:53:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:58.163 03:53:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:58.163 03:53:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:58.163 03:53:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:58.163 03:53:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:58.163 03:53:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:01:58.164 03:53:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:01:58.164 03:53:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:58.164 03:53:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:58.164 03:53:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:58.164 03:53:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:58.164 03:53:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:58.164 03:53:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:58.164 03:53:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:58.164 03:53:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:58.164 03:53:12 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.164 03:53:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.164 03:53:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.164 03:53:12 -- paths/export.sh@5 -- # export PATH 00:01:58.164 03:53:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.164 03:53:12 -- nvmf/common.sh@47 -- # : 0 00:01:58.164 03:53:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:58.164 03:53:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:58.164 03:53:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:58.164 03:53:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:58.164 03:53:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:58.164 03:53:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:58.164 03:53:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:58.164 03:53:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:58.164 03:53:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:58.164 03:53:12 -- spdk/autotest.sh@32 -- # 
uname -s 00:01:58.164 03:53:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:58.164 03:53:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:58.164 03:53:12 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:01:58.164 03:53:12 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:58.164 03:53:12 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:01:58.164 03:53:12 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:58.424 03:53:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:58.424 03:53:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:58.424 03:53:12 -- spdk/autotest.sh@48 -- # udevadm_pid=64879 00:01:58.424 03:53:12 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:58.424 03:53:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:58.424 03:53:12 -- pm/common@17 -- # local monitor 00:01:58.424 03:53:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.424 03:53:12 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=64882 00:01:58.424 03:53:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.424 03:53:12 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=64884 00:01:58.424 03:53:12 -- pm/common@21 -- # date +%s 00:01:58.424 03:53:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.424 03:53:12 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=64886 00:01:58.424 03:53:12 -- pm/common@21 -- # date +%s 00:01:58.424 03:53:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:58.424 03:53:12 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=64892 00:01:58.424 03:53:12 -- pm/common@26 -- # sleep 1 00:01:58.424 03:53:12 -- pm/common@21 -- # date +%s 00:01:58.424 03:53:12 -- pm/common@21 
-- # date +%s 00:01:58.424 03:53:12 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713491592 00:01:58.424 03:53:12 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713491592 00:01:58.424 03:53:12 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713491592 00:01:58.424 03:53:12 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713491592 00:01:58.424 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713491592_collect-vmstat.pm.log 00:01:58.424 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713491592_collect-cpu-temp.pm.log 00:01:58.424 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713491592_collect-bmc-pm.bmc.pm.log 00:01:58.424 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713491592_collect-cpu-load.pm.log 00:01:59.364 03:53:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:59.364 03:53:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:59.364 03:53:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:01:59.364 03:53:13 -- common/autotest_common.sh@10 -- # set +x 00:01:59.364 03:53:13 -- spdk/autotest.sh@59 -- # create_test_list 00:01:59.364 03:53:13 -- common/autotest_common.sh@734 -- # xtrace_disable 00:01:59.364 
03:53:13 -- common/autotest_common.sh@10 -- # set +x 00:01:59.364 03:53:13 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:01:59.364 03:53:13 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:59.364 03:53:13 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:59.364 03:53:13 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:59.364 03:53:13 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:59.364 03:53:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:59.364 03:53:13 -- common/autotest_common.sh@1441 -- # uname 00:01:59.364 03:53:13 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:01:59.364 03:53:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:59.364 03:53:13 -- common/autotest_common.sh@1461 -- # uname 00:01:59.364 03:53:13 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:01:59.364 03:53:13 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:59.364 03:53:13 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:59.364 03:53:13 -- spdk/autotest.sh@72 -- # hash lcov 00:01:59.364 03:53:13 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:59.364 03:53:13 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:59.365 --rc lcov_branch_coverage=1 00:01:59.365 --rc lcov_function_coverage=1 00:01:59.365 --rc genhtml_branch_coverage=1 00:01:59.365 --rc genhtml_function_coverage=1 00:01:59.365 --rc genhtml_legend=1 00:01:59.365 --rc geninfo_all_blocks=1 00:01:59.365 ' 00:01:59.365 03:53:13 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:59.365 --rc lcov_branch_coverage=1 00:01:59.365 --rc lcov_function_coverage=1 00:01:59.365 --rc genhtml_branch_coverage=1 00:01:59.365 --rc genhtml_function_coverage=1 00:01:59.365 --rc genhtml_legend=1 00:01:59.365 --rc geninfo_all_blocks=1 00:01:59.365 ' 00:01:59.365 
03:53:13 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:59.365 --rc lcov_branch_coverage=1 00:01:59.365 --rc lcov_function_coverage=1 00:01:59.365 --rc genhtml_branch_coverage=1 00:01:59.365 --rc genhtml_function_coverage=1 00:01:59.365 --rc genhtml_legend=1 00:01:59.365 --rc geninfo_all_blocks=1 00:01:59.365 --no-external' 00:01:59.365 03:53:13 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:59.365 --rc lcov_branch_coverage=1 00:01:59.365 --rc lcov_function_coverage=1 00:01:59.365 --rc genhtml_branch_coverage=1 00:01:59.365 --rc genhtml_function_coverage=1 00:01:59.365 --rc genhtml_legend=1 00:01:59.365 --rc geninfo_all_blocks=1 00:01:59.365 --no-external' 00:01:59.365 03:53:13 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:59.365 lcov: LCOV version 1.14 00:01:59.365 03:53:13 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:02:07.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:07.498 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:08.437 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:08.437 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:08.437 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:08.437 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:08.437 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:08.437 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:20.656 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:20.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:20.656 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:20.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:20.656 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:20.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:20.656 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:20.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:20.656 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:20.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:20.656 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:20.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:20.656 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:20.656 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:20.656 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:20.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:20.656 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:20.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:20.656 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:20.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:20.656 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:20.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:20.656 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:20.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:20.656 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:20.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:20.656 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:20.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:20.656 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:20.656 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:20.656 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:20.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:20.656 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:20.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:20.656 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:20.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:20.656 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:20.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:20.656 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:20.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:20.656 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:20.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:20.656 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:20.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:20.656 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:20.656 geninfo: WARNING: GCOV did not produce 
any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:20.656 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:20.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:20.656 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:20.656 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:20.657 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno
00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found
00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno
00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found
00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno
00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found
00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno
00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found
00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno
00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found
00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno
00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found
00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno
00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found
00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno
00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found
00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno
00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found
00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno
00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found
00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno
00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found
00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno
00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found
00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno
00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found
00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno
00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found
00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno
00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found
00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno
00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found
00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno
00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found
00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno
00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found
00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno
00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found
00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno
00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found
00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno
00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found
00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno
00:02:20.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found
00:02:20.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno
00:02:20.658 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found
00:02:20.658 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno
00:02:20.658 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found
00:02:20.658 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno
00:02:20.658 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found
00:02:20.658 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno
00:02:20.658 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found
00:02:20.658 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno
00:02:20.658 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found
00:02:20.658 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno
00:02:20.658 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found
00:02:20.658 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno
00:02:20.658 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found
00:02:20.658 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno
00:02:20.658 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found
00:02:20.658 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno
00:02:20.658 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found
00:02:20.658 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno
00:02:20.658 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found
00:02:20.658 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno
00:02:20.658 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found
00:02:20.658 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno
00:02:20.658 03:53:35 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:02:20.658 03:53:35 -- common/autotest_common.sh@710 -- # xtrace_disable
00:02:20.658 03:53:35 -- common/autotest_common.sh@10 -- # set +x
00:02:20.658 03:53:35 -- spdk/autotest.sh@91 -- # rm -f
00:02:20.658 03:53:35 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:02:23.956 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:02:23.956 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:02:23.956 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:02:23.956 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:02:23.956 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:02:23.956 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:02:23.956 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:02:23.956 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:02:23.956 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:02:23.956 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:02:23.956 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:02:23.956 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:02:23.956 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:02:23.956 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:02:23.956 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:02:23.956 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:02:23.956 0000:d8:00.0 (8086 0a54): Already using the nvme driver
00:02:25.338 03:53:39 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:02:25.338 03:53:39 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:02:25.338 03:53:39 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:02:25.338 03:53:39 -- common/autotest_common.sh@1656 -- # local nvme bdf
00:02:25.338 03:53:39 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:02:25.338 03:53:39 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:02:25.338 03:53:39 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:02:25.338 03:53:39 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:25.338 03:53:39 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:02:25.338 03:53:39 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:02:25.338 03:53:39 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:02:25.338 03:53:39 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:02:25.338 03:53:39 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:02:25.338 03:53:39 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:02:25.338 03:53:39 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:02:25.338 No valid GPT data, bailing
00:02:25.338 03:53:39 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:02:25.338 03:53:39 -- scripts/common.sh@391 -- # pt=
00:02:25.338 03:53:39 -- scripts/common.sh@392 -- # return 1
00:02:25.338 03:53:39 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:02:25.338 1+0 records in
00:02:25.338 1+0 records out
00:02:25.338 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00258073 s, 406 MB/s
00:02:25.338 03:53:39 -- spdk/autotest.sh@118 -- # sync
00:02:25.338 03:53:39 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:02:25.338 03:53:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:02:25.338 03:53:39 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:02:31.916 03:53:45 -- spdk/autotest.sh@124 -- # uname -s
00:02:31.916 03:53:45 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:02:31.916 03:53:45 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh
00:02:31.916 03:53:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:31.916 03:53:45 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:31.916 03:53:45 -- common/autotest_common.sh@10 -- # set +x
00:02:31.916 ************************************
00:02:31.916 START TEST setup.sh
00:02:31.916 ************************************ 03:53:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh
00:02:31.916 * Looking for test storage...
00:02:31.916 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup
00:02:31.916 03:53:45 -- setup/test-setup.sh@10 -- # uname -s
00:02:31.916 03:53:45 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:02:31.916 03:53:45 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh
00:02:31.916 03:53:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:31.916 03:53:45 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:31.916 03:53:45 -- common/autotest_common.sh@10 -- # set +x
00:02:31.916 ************************************
00:02:31.916 START TEST acl
00:02:31.916 ************************************ 03:53:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh * Looking for test storage...
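The `get_zoned_devs` trace above (lines tagged `common/autotest_common.sh@1648`–`@1659`) classifies each NVMe namespace by reading its `queue/zoned` sysfs attribute. The following is a minimal re-creation of that probe, not the harness's exact code: the sysfs root is passed as a parameter (an assumption added here for illustration) so the sketch can run against a fake tree instead of a live `/sys/block`.

```shell
#!/usr/bin/env bash
# Sketch of the zoned-device probe seen in the trace: a device counts as
# zoned when /sys/block/<dev>/queue/zoned exists and is not "none".
is_block_zoned() {
    local sysfs=$1 device=$2
    # Missing attribute -> kernel too old or not a block queue; treat as not zoned
    [[ -e $sysfs/$device/queue/zoned ]] || return 1
    # "none" marks a conventional (non-zoned) device
    [[ $(<"$sysfs/$device/queue/zoned") != none ]]
}

# Demo against a fake sysfs tree (hypothetical device names)
root=$(mktemp -d)
mkdir -p "$root/nvme0n1/queue" "$root/nvme1n1/queue"
echo none > "$root/nvme0n1/queue/zoned"          # conventional namespace
echo host-managed > "$root/nvme1n1/queue/zoned"  # zoned namespace
for dev in nvme0n1 nvme1n1; do
    if is_block_zoned "$root" "$dev"; then
        echo "$dev: zoned"
    else
        echo "$dev: not zoned"
    fi
done
rm -rf "$root"
```

In the log the check `[[ none != none ]]` is this comparison evaluating false for the conventional namespace `nvme0n1`, which is why `zoned_devs` stays empty and `(( 0 > 0 ))` skips the zoned-device branch.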
00:02:31.916 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 03:53:45 -- setup/acl.sh@10 -- # get_zoned_devs
00:02:31.916 03:53:45 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:02:31.916 03:53:45 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:02:31.916 03:53:45 -- common/autotest_common.sh@1656 -- # local nvme bdf
00:02:31.916 03:53:45 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:02:31.916 03:53:45 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:02:31.916 03:53:45 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:02:31.916 03:53:45 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:31.916 03:53:45 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:02:31.916 03:53:45 -- setup/acl.sh@12 -- # devs=()
00:02:31.916 03:53:45 -- setup/acl.sh@12 -- # declare -a devs
00:02:31.916 03:53:45 -- setup/acl.sh@13 -- # drivers=()
00:02:31.916 03:53:45 -- setup/acl.sh@13 -- # declare -A drivers
00:02:31.916 03:53:45 -- setup/acl.sh@51 -- # setup reset
00:02:31.916 03:53:45 -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:31.916 03:53:45 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:02:36.116 03:53:50 -- setup/acl.sh@52 -- # collect_setup_devs
00:02:36.116 03:53:50 -- setup/acl.sh@16 -- # local dev driver
00:02:36.116 03:53:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:36.116 03:53:50 -- setup/acl.sh@15 -- # setup output status
00:02:36.116 03:53:50 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:36.116 03:53:50 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:02:38.657 Hugepages
00:02:38.657 node hugesize free / total
00:02:38.657 03:53:52 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:02:38.657 03:53:52 -- setup/acl.sh@19 -- # continue
00:02:38.657 03:53:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:38.657 03:53:52 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:02:38.657 03:53:52 -- setup/acl.sh@19 -- # continue
00:02:38.657 03:53:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:38.657 03:53:52 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:02:38.657 03:53:52 -- setup/acl.sh@19 -- # continue
00:02:38.657 03:53:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:38.657
00:02:38.657 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:38.657 03:53:52 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:02:38.657 03:53:52 -- setup/acl.sh@19 -- # continue
00:02:38.657 03:53:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:38.657 03:53:52 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # continue
00:02:38.657 03:53:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:38.657 03:53:52 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # continue
00:02:38.657 03:53:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:38.657 03:53:52 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # continue
00:02:38.657 03:53:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:38.657 03:53:52 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # continue
00:02:38.657 03:53:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:38.657 03:53:52 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # continue
00:02:38.657 03:53:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:38.657 03:53:52 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # continue
00:02:38.657 03:53:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:38.657 03:53:52 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # continue
00:02:38.657 03:53:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:38.657 03:53:52 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # continue
00:02:38.657 03:53:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:38.657 03:53:52 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # continue
00:02:38.657 03:53:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:38.657 03:53:52 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # continue
00:02:38.657 03:53:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:38.657 03:53:52 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # continue
00:02:38.657 03:53:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:38.657 03:53:52 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # continue
00:02:38.657 03:53:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:38.657 03:53:52 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # continue
00:02:38.657 03:53:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:38.657 03:53:52 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # continue
00:02:38.657 03:53:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:38.657 03:53:52 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # continue
00:02:38.657 03:53:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:38.657 03:53:52 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # continue
00:02:38.657 03:53:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:38.657 03:53:52 -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]]
00:02:38.657 03:53:52 -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:02:38.657 03:53:52 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]]
00:02:38.657 03:53:52 -- setup/acl.sh@22 -- # devs+=("$dev")
00:02:38.657 03:53:52 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:02:38.657 03:53:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:38.657 03:53:52 -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:02:38.657 03:53:52 -- setup/acl.sh@54 -- # run_test denied denied
00:02:38.657 03:53:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:38.657 03:53:52 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:38.657 03:53:52 -- common/autotest_common.sh@10 -- # set +x
00:02:38.657 ************************************
00:02:38.657 START TEST denied
************************************ 00:02:38.657 03:53:53 -- common/autotest_common.sh@1111 -- # denied 00:02:38.657 03:53:53 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:02:38.657 03:53:53 -- setup/acl.sh@38 -- # setup output config 00:02:38.657 03:53:53 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:02:38.657 03:53:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:38.657 03:53:53 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:02:42.853 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:02:42.853 03:53:57 -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:02:42.853 03:53:57 -- setup/acl.sh@28 -- # local dev driver 00:02:42.853 03:53:57 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:42.853 03:53:57 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:02:42.853 03:53:57 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:02:42.853 03:53:57 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:42.853 03:53:57 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:42.853 03:53:57 -- setup/acl.sh@41 -- # setup reset 00:02:42.853 03:53:57 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:42.853 03:53:57 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:49.430 00:02:49.430 real 0m9.545s 00:02:49.430 user 0m3.159s 00:02:49.430 sys 0m5.533s 00:02:49.430 03:54:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:49.430 03:54:02 -- common/autotest_common.sh@10 -- # set +x 00:02:49.430 ************************************ 00:02:49.430 END TEST denied 00:02:49.430 ************************************ 00:02:49.430 03:54:02 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:49.430 03:54:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:49.430 03:54:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:49.430 03:54:02 -- 
common/autotest_common.sh@10 -- # set +x 00:02:49.430 ************************************ 00:02:49.430 START TEST allowed 00:02:49.430 ************************************ 00:02:49.430 03:54:02 -- common/autotest_common.sh@1111 -- # allowed 00:02:49.430 03:54:02 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:02:49.430 03:54:02 -- setup/acl.sh@45 -- # setup output config 00:02:49.430 03:54:02 -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:02:49.430 03:54:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:49.430 03:54:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:02:56.007 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:02:56.007 03:54:10 -- setup/acl.sh@47 -- # verify 00:02:56.007 03:54:10 -- setup/acl.sh@28 -- # local dev driver 00:02:56.007 03:54:10 -- setup/acl.sh@48 -- # setup reset 00:02:56.007 03:54:10 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:56.007 03:54:10 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:00.217 00:03:00.217 real 0m11.677s 00:03:00.217 user 0m3.081s 00:03:00.217 sys 0m5.367s 00:03:00.217 03:54:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:00.217 03:54:14 -- common/autotest_common.sh@10 -- # set +x 00:03:00.217 ************************************ 00:03:00.217 END TEST allowed 00:03:00.217 ************************************ 00:03:00.217 00:03:00.217 real 0m28.852s 00:03:00.217 user 0m9.066s 00:03:00.217 sys 0m15.863s 00:03:00.217 03:54:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:00.217 03:54:14 -- common/autotest_common.sh@10 -- # set +x 00:03:00.217 ************************************ 00:03:00.217 END TEST acl 00:03:00.217 ************************************ 00:03:00.217 03:54:14 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:00.217 03:54:14 -- common/autotest_common.sh@1087 
-- # '[' 2 -le 1 ']' 00:03:00.217 03:54:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:00.217 03:54:14 -- common/autotest_common.sh@10 -- # set +x 00:03:00.217 ************************************ 00:03:00.217 START TEST hugepages 00:03:00.217 ************************************ 00:03:00.217 03:54:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:00.478 * Looking for test storage... 00:03:00.478 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:00.478 03:54:14 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:00.478 03:54:14 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:00.478 03:54:14 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:00.478 03:54:14 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:00.478 03:54:14 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:00.478 03:54:14 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:00.478 03:54:14 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:00.478 03:54:14 -- setup/common.sh@18 -- # local node= 00:03:00.478 03:54:14 -- setup/common.sh@19 -- # local var val 00:03:00.478 03:54:14 -- setup/common.sh@20 -- # local mem_f mem 00:03:00.478 03:54:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.478 03:54:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.478 03:54:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.478 03:54:14 -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.478 03:54:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.478 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.478 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 77333220 kB' 'MemAvailable: 80784916 kB' 'Buffers: 9460 kB' 'Cached: 8822616 kB' 'SwapCached: 0 kB' 'Active: 6137928 kB' 
'Inactive: 3400272 kB' 'Active(anon): 5593076 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 709440 kB' 'Mapped: 145384 kB' 'Shmem: 4886952 kB' 'KReclaimable: 205104 kB' 'Slab: 607476 kB' 'SReclaimable: 205104 kB' 'SUnreclaim: 402372 kB' 'KernelStack: 22624 kB' 'PageTables: 8200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52949052 kB' 'Committed_AS: 7899976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214744 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 'DirectMap1G: 92274688 kB' 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ Buffers 
== \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- 
setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.479 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.479 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.480 03:54:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.480 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.480 03:54:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.480 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.480 03:54:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.480 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.480 03:54:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.480 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.480 03:54:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.480 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.480 03:54:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.480 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.480 03:54:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.480 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.480 03:54:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.480 
03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.480 03:54:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.480 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.480 03:54:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.480 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.480 03:54:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.480 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.480 03:54:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.480 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.480 03:54:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.480 03:54:14 -- setup/common.sh@32 -- # continue 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.480 03:54:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.480 03:54:14 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.480 03:54:14 -- setup/common.sh@33 -- # echo 2048 00:03:00.480 03:54:14 -- setup/common.sh@33 -- # return 0 00:03:00.480 03:54:14 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:00.480 03:54:14 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:00.480 
03:54:14 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:00.480 03:54:14 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:00.480 03:54:14 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:00.480 03:54:14 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:00.480 03:54:14 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:00.480 03:54:14 -- setup/hugepages.sh@207 -- # get_nodes 00:03:00.480 03:54:14 -- setup/hugepages.sh@27 -- # local node 00:03:00.480 03:54:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:00.480 03:54:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:00.480 03:54:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:00.480 03:54:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:00.480 03:54:14 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:00.480 03:54:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:00.480 03:54:14 -- setup/hugepages.sh@208 -- # clear_hp 00:03:00.480 03:54:14 -- setup/hugepages.sh@37 -- # local node hp 00:03:00.480 03:54:14 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:00.480 03:54:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:00.480 03:54:14 -- setup/hugepages.sh@41 -- # echo 0 00:03:00.480 03:54:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:00.480 03:54:14 -- setup/hugepages.sh@41 -- # echo 0 00:03:00.480 03:54:14 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:00.480 03:54:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:00.480 03:54:14 -- setup/hugepages.sh@41 -- # echo 0 00:03:00.480 03:54:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:00.480 03:54:14 -- setup/hugepages.sh@41 -- # 
echo 0 00:03:00.480 03:54:14 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:00.480 03:54:14 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:00.480 03:54:14 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:00.480 03:54:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:00.480 03:54:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:00.480 03:54:14 -- common/autotest_common.sh@10 -- # set +x 00:03:00.480 ************************************ 00:03:00.480 START TEST default_setup 00:03:00.480 ************************************ 00:03:00.480 03:54:14 -- common/autotest_common.sh@1111 -- # default_setup 00:03:00.480 03:54:14 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:00.480 03:54:14 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:00.480 03:54:14 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:00.480 03:54:14 -- setup/hugepages.sh@51 -- # shift 00:03:00.480 03:54:15 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:00.480 03:54:15 -- setup/hugepages.sh@52 -- # local node_ids 00:03:00.480 03:54:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:00.480 03:54:15 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:00.480 03:54:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:00.480 03:54:15 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:00.480 03:54:15 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:00.740 03:54:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:00.740 03:54:15 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:00.740 03:54:15 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:00.740 03:54:15 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:00.740 03:54:15 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:00.740 03:54:15 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:00.740 03:54:15 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:00.740 03:54:15 -- 
setup/hugepages.sh@73 -- # return 0 00:03:00.740 03:54:15 -- setup/hugepages.sh@137 -- # setup output 00:03:00.740 03:54:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:00.740 03:54:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:03.281 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:03.281 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:03.281 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:03.281 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:03.541 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:03.541 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:03.541 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:03.541 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:03.541 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:03.541 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:03.541 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:03.541 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:03.541 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:03.541 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:03.541 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:03.541 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:06.839 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:08.280 03:54:22 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:08.280 03:54:22 -- setup/hugepages.sh@89 -- # local node 00:03:08.280 03:54:22 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:08.280 03:54:22 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:08.280 03:54:22 -- setup/hugepages.sh@92 -- # local surp 00:03:08.280 03:54:22 -- setup/hugepages.sh@93 -- # local resv 00:03:08.280 03:54:22 -- setup/hugepages.sh@94 -- # local anon 00:03:08.280 03:54:22 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:08.280 03:54:22 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:08.280 03:54:22 -- setup/common.sh@17 -- # local get=AnonHugePages 
00:03:08.280 03:54:22 -- setup/common.sh@18 -- # local node= 00:03:08.280 03:54:22 -- setup/common.sh@19 -- # local var val 00:03:08.280 03:54:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.280 03:54:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.280 03:54:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.280 03:54:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.280 03:54:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.280 03:54:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 79523600 kB' 'MemAvailable: 82974824 kB' 'Buffers: 9460 kB' 'Cached: 8823032 kB' 'SwapCached: 0 kB' 'Active: 6159028 kB' 'Inactive: 3400272 kB' 'Active(anon): 5614176 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 729820 kB' 'Mapped: 145496 kB' 'Shmem: 4887368 kB' 'KReclaimable: 204160 kB' 'Slab: 605700 kB' 'SReclaimable: 204160 kB' 'SUnreclaim: 401540 kB' 'KernelStack: 22464 kB' 'PageTables: 8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53997628 kB' 'Committed_AS: 7914300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214632 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 
'DirectMap1G: 92274688 kB' 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- 
setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ SwapTotal 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- 
setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 
03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.280 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.280 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.281 03:54:22 -- setup/common.sh@33 -- # echo 0 00:03:08.281 03:54:22 -- setup/common.sh@33 -- # return 0 00:03:08.281 03:54:22 -- setup/hugepages.sh@97 -- # anon=0 00:03:08.281 03:54:22 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:08.281 03:54:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:08.281 03:54:22 -- setup/common.sh@18 -- # local node= 00:03:08.281 03:54:22 -- setup/common.sh@19 -- # local var val 00:03:08.281 03:54:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.281 03:54:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.281 03:54:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.281 03:54:22 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.281 03:54:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.281 03:54:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.281 03:54:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 79524668 kB' 'MemAvailable: 82975872 kB' 'Buffers: 9460 kB' 'Cached: 8823036 kB' 'SwapCached: 0 kB' 'Active: 6158804 kB' 'Inactive: 3400272 kB' 'Active(anon): 5613952 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 730036 kB' 'Mapped: 145404 kB' 'Shmem: 4887372 kB' 'KReclaimable: 204120 kB' 'Slab: 605580 kB' 'SReclaimable: 204120 kB' 'SUnreclaim: 401460 kB' 'KernelStack: 22448 kB' 'PageTables: 8140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53997628 kB' 'Committed_AS: 7914308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214632 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 'DirectMap1G: 92274688 kB' 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:08.281 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.281 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.281 03:54:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.281 03:54:22 -- 
setup/common.sh@32 -- # continue 00:03:08.281 03:54:22 [... the trace repeats the `-- setup/common.sh@31 -- # IFS=': '` / `# read -r var val _` / `-- setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]` / `# continue` cycle for each remaining /proc/meminfo field: Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd ...] 00:03:08.282 03:54:22 -- setup/common.sh@31 --
# read -r var val _ 00:03:08.282 03:54:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.282 03:54:22 -- setup/common.sh@33 -- # echo 0 00:03:08.282 03:54:22 -- setup/common.sh@33 -- # return 0 00:03:08.282 03:54:22 -- setup/hugepages.sh@99 -- # surp=0 00:03:08.282 03:54:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:08.282 03:54:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:08.282 03:54:22 -- setup/common.sh@18 -- # local node= 00:03:08.282 03:54:22 -- setup/common.sh@19 -- # local var val 00:03:08.282 03:54:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.282 03:54:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.282 03:54:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.282 03:54:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.282 03:54:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.282 03:54:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.282 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.282 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.282 03:54:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 79524668 kB' 'MemAvailable: 82975872 kB' 'Buffers: 9460 kB' 'Cached: 8823044 kB' 'SwapCached: 0 kB' 'Active: 6159028 kB' 'Inactive: 3400272 kB' 'Active(anon): 5614176 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 730224 kB' 'Mapped: 145404 kB' 'Shmem: 4887380 kB' 'KReclaimable: 204120 kB' 'Slab: 605580 kB' 'SReclaimable: 204120 kB' 'SUnreclaim: 401460 kB' 'KernelStack: 22432 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53997628 kB' 'Committed_AS: 7927612 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 214648 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 'DirectMap1G: 92274688 kB' 00:03:08.282 03:54:22 [... the trace repeats the `-- setup/common.sh@31 -- # IFS=': '` / `# read -r var val _` / `-- setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]` / `# continue` cycle for each /proc/meminfo field from MemTotal through Unaccepted ...] 00:03:08.283 03:54:22 -- setup/common.sh@31 -- # read -r var val _
00:03:08.283 03:54:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.283 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.283 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.283 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.283 03:54:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.283 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.283 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.283 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.283 03:54:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.283 03:54:22 -- setup/common.sh@33 -- # echo 0 00:03:08.283 03:54:22 -- setup/common.sh@33 -- # return 0 00:03:08.283 03:54:22 -- setup/hugepages.sh@100 -- # resv=0 00:03:08.283 03:54:22 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:08.283 nr_hugepages=1024 00:03:08.283 03:54:22 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:08.283 resv_hugepages=0 00:03:08.283 03:54:22 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:08.283 surplus_hugepages=0 00:03:08.283 03:54:22 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:08.283 anon_hugepages=0 00:03:08.283 03:54:22 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:08.283 03:54:22 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:08.283 03:54:22 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:08.283 03:54:22 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:08.283 03:54:22 -- setup/common.sh@18 -- # local node= 00:03:08.283 03:54:22 -- setup/common.sh@19 -- # local var val 00:03:08.283 03:54:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.283 03:54:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.283 03:54:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.283 03:54:22 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.283 03:54:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.283 03:54:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.283 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.283 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.283 03:54:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 79524928 kB' 'MemAvailable: 82976132 kB' 'Buffers: 9460 kB' 'Cached: 8823064 kB' 'SwapCached: 0 kB' 'Active: 6158388 kB' 'Inactive: 3400272 kB' 'Active(anon): 5613536 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 729576 kB' 'Mapped: 145404 kB' 'Shmem: 4887400 kB' 'KReclaimable: 204120 kB' 'Slab: 605580 kB' 'SReclaimable: 204120 kB' 'SUnreclaim: 401460 kB' 'KernelStack: 22400 kB' 'PageTables: 7936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53997628 kB' 'Committed_AS: 7913972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214600 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 'DirectMap1G: 92274688 kB' 00:03:08.283 03:54:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.283 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.283 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.283 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.283 03:54:22 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.283 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.283 03:54:22 [... the trace repeats the `-- setup/common.sh@31 -- # IFS=': '` / `# read -r var val _` / `-- setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]` / `# continue` cycle for each /proc/meminfo field from MemAvailable through VmallocTotal ...] 00:03:08.284 03:54:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.284 03:54:22 --
setup/common.sh@32 -- # continue 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.284 03:54:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.284 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.284 03:54:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.284 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.284 03:54:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.284 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.284 03:54:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.284 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.284 03:54:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.284 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.284 03:54:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.284 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.284 03:54:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.284 03:54:22 -- 
setup/common.sh@32 -- # continue 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.284 03:54:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.284 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.284 03:54:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.284 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.284 03:54:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.284 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.284 03:54:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.284 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.284 03:54:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.284 03:54:22 -- setup/common.sh@33 -- # echo 1024 00:03:08.284 03:54:22 -- setup/common.sh@33 -- # return 0 00:03:08.284 03:54:22 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:08.284 03:54:22 -- setup/hugepages.sh@112 -- # get_nodes 00:03:08.284 03:54:22 -- setup/hugepages.sh@27 -- # local node 00:03:08.284 03:54:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.284 03:54:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:08.284 03:54:22 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:08.284 03:54:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:08.284 03:54:22 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:08.284 03:54:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:08.284 03:54:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:08.284 03:54:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:08.284 03:54:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:08.284 03:54:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:08.284 03:54:22 -- setup/common.sh@18 -- # local node=0 00:03:08.284 03:54:22 -- setup/common.sh@19 -- # local var val 00:03:08.284 03:54:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.284 03:54:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.284 03:54:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:08.284 03:54:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:08.284 03:54:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.284 03:54:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.284 03:54:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32581956 kB' 'MemFree: 26592920 kB' 'MemUsed: 5989036 kB' 'SwapCached: 0 kB' 'Active: 2246824 kB' 'Inactive: 152144 kB' 'Active(anon): 1879076 kB' 'Inactive(anon): 0 kB' 'Active(file): 367748 kB' 'Inactive(file): 152144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1905092 kB' 'Mapped: 109536 kB' 'AnonPages: 497212 kB' 'Shmem: 1385200 kB' 'KernelStack: 12648 kB' 'PageTables: 5044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 136620 kB' 'Slab: 401520 kB' 'SReclaimable: 136620 kB' 'SUnreclaim: 264900 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:08.284 03:54:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.284 03:54:22 -- setup/common.sh@32 -- # continue 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.284 03:54:22 -- setup/common.sh@31 -- # read -r var val _ [... identical continue/IFS/read xtrace iterations over the remaining non-matching node0 meminfo fields elided ...] 00:03:08.285 03:54:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.285 03:54:22 -- setup/common.sh@33 -- # echo 0 00:03:08.285 03:54:22 -- setup/common.sh@33 -- # return 0 00:03:08.285 03:54:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:08.285 03:54:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:08.285 03:54:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:08.285 03:54:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:08.285 03:54:22 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:08.285 node0=1024 expecting 1024 00:03:08.285 03:54:22 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:08.285 00:03:08.285 real 0m7.683s 00:03:08.285 user 0m1.695s 00:03:08.285 sys 0m2.725s 00:03:08.285 03:54:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:08.285 03:54:22 -- common/autotest_common.sh@10 -- # set +x 00:03:08.285 ************************************ 00:03:08.285 END TEST default_setup 00:03:08.285 ************************************ 00:03:08.285 03:54:22 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:08.285 03:54:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:08.285 03:54:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:08.285 03:54:22 -- common/autotest_common.sh@10 -- # set +x 00:03:08.602 ************************************ 00:03:08.602 START TEST per_node_1G_alloc 00:03:08.602 ************************************ 00:03:08.602 03:54:22 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:03:08.602 
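[Editor's note: the xtrace above comes from setup/common.sh's get_meminfo, which reads a meminfo-style file line by line, splits each "Name: value [kB]" record on ': ', and echoes the value once the requested field matches. A minimal standalone sketch of that lookup follows; the function name `get_field` and the sample data are illustrative, not part of the SPDK scripts.]

```shell
# Hypothetical mirror of the get_meminfo field scan seen in the xtrace:
# split each "Name: value [kB]" line on ': ' and print the matching value.
get_field() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "$val"      # value only; the trailing "kB" unit lands in $_
      return 0
    fi
  done
  return 1             # field not present
}

sample='MemTotal: 32581956 kB
HugePages_Total: 1024
HugePages_Surp: 0'

get_field HugePages_Total <<< "$sample"   # prints 1024
```

The real script additionally strips the "Node 0 " prefix from per-node sysfs meminfo files before scanning, which is what the `mem=("${mem[@]#Node +([0-9]) }")` line above does.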
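[Editor's note: the per_node_1G_alloc test below drives scripts/setup.sh with NRHUGE=512 and HUGENODE=0,1, i.e. 512 hugepages requested on each of the two NUMA nodes. Linux exposes this as a per-node sysfs knob; the helper below is a hedged sketch of that mechanism, not SPDK's implementation, with the sysfs root parameterized so it can run without root.]

```shell
# Hypothetical helper: request COUNT 2 MiB hugepages on a NUMA node by
# writing the kernel's per-node nr_hugepages knob. sysfs_root defaults to
# /sys but can be overridden (e.g. to a temp dir) for dry-run testing.
sysfs_root=${sysfs_root:-/sys}
set_node_hugepages() {
  local node=$1 count=$2
  local knob="$sysfs_root/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
  echo "$count" > "$knob"
}

# Mirroring HUGENODE=0,1 NRHUGE=512 (needs root against the real /sys):
# for node in 0 1; do set_node_hugepages "$node" 512; done
```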
03:54:22 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:08.602 03:54:22 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:08.602 03:54:22 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:08.602 03:54:22 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:08.602 03:54:22 -- setup/hugepages.sh@51 -- # shift 00:03:08.602 03:54:22 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:08.602 03:54:22 -- setup/hugepages.sh@52 -- # local node_ids 00:03:08.602 03:54:22 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:08.602 03:54:22 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:08.602 03:54:22 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:08.602 03:54:22 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:08.602 03:54:22 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:08.602 03:54:22 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:08.602 03:54:22 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:08.602 03:54:22 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:08.602 03:54:22 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:08.602 03:54:22 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:08.602 03:54:22 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:08.602 03:54:22 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:08.602 03:54:22 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:08.602 03:54:22 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:08.602 03:54:22 -- setup/hugepages.sh@73 -- # return 0 00:03:08.602 03:54:22 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:08.602 03:54:22 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:08.602 03:54:22 -- setup/hugepages.sh@146 -- # setup output 00:03:08.602 03:54:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:08.602 03:54:22 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:11.141 0000:00:04.7 
(8086 2021): Already using the vfio-pci driver 00:03:11.141 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:11.141 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:11.141 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:11.141 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:11.141 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:11.141 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:11.141 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:11.141 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:11.141 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:11.142 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:11.142 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:11.142 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:11.142 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:11.142 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:11.142 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:11.142 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:12.527 03:54:26 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:12.527 03:54:26 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:12.527 03:54:26 -- setup/hugepages.sh@89 -- # local node 00:03:12.527 03:54:26 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:12.527 03:54:26 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:12.527 03:54:26 -- setup/hugepages.sh@92 -- # local surp 00:03:12.527 03:54:26 -- setup/hugepages.sh@93 -- # local resv 00:03:12.527 03:54:26 -- setup/hugepages.sh@94 -- # local anon 00:03:12.527 03:54:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:12.527 03:54:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:12.527 03:54:26 -- setup/common.sh@17 -- # local get=AnonHugePages 
00:03:12.527 03:54:26 -- setup/common.sh@18 -- # local node= 00:03:12.527 03:54:26 -- setup/common.sh@19 -- # local var val 00:03:12.527 03:54:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.527 03:54:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.527 03:54:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.527 03:54:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.527 03:54:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.527 03:54:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.527 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.527 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.527 03:54:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 79518316 kB' 'MemAvailable: 82970016 kB' 'Buffers: 9460 kB' 'Cached: 8823444 kB' 'SwapCached: 0 kB' 'Active: 6162828 kB' 'Inactive: 3400272 kB' 'Active(anon): 5617976 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 733644 kB' 'Mapped: 144520 kB' 'Shmem: 4887780 kB' 'KReclaimable: 204104 kB' 'Slab: 606300 kB' 'SReclaimable: 204104 kB' 'SUnreclaim: 402196 kB' 'KernelStack: 22432 kB' 'PageTables: 8036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53997628 kB' 'Committed_AS: 7907036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214680 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 
'DirectMap1G: 92274688 kB' 00:03:12.527 03:54:26 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.527 03:54:26 -- setup/common.sh@32 -- # continue 00:03:12.527 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.527 03:54:26 -- setup/common.sh@31 -- # read -r var val _ [... identical continue/IFS/read xtrace iterations over the remaining non-matching meminfo fields elided ...] 00:03:12.528 03:54:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.528 03:54:26 -- setup/common.sh@32 -- # continue 00:03:12.528 03:54:26 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:12.528 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.528 03:54:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.528 03:54:26 -- setup/common.sh@32 -- # continue 00:03:12.528 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.528 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.528 03:54:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.528 03:54:26 -- setup/common.sh@32 -- # continue 00:03:12.528 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.528 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.528 03:54:26 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.528 03:54:26 -- setup/common.sh@32 -- # continue 00:03:12.528 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.528 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.528 03:54:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.528 03:54:26 -- setup/common.sh@32 -- # continue 00:03:12.528 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.528 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.528 03:54:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.528 03:54:26 -- setup/common.sh@33 -- # echo 0 00:03:12.528 03:54:26 -- setup/common.sh@33 -- # return 0 00:03:12.528 03:54:26 -- setup/hugepages.sh@97 -- # anon=0 00:03:12.528 03:54:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:12.528 03:54:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.528 03:54:26 -- setup/common.sh@18 -- # local node= 00:03:12.528 03:54:26 -- setup/common.sh@19 -- # local var val 00:03:12.528 03:54:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.528 03:54:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.528 03:54:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.528 03:54:26 
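The get_meminfo trace above amounts to a small field lookup over /proc/meminfo: load the file into an array, split each "Key: value [kB]" line on `': '`, and emit the value once the requested key matches. A minimal sketch of that pattern (reconstructed from the trace, not copied from the SPDK sources; assumes bash 4+ for `mapfile` and a Linux /proc/meminfo):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the setup/common.sh trace:
# read /proc/meminfo, split each "Key: value [kB]" line on ': ', and
# print the value for the requested key (0 if the key is absent).
get_meminfo() {
    local get=$1 var val _ line
    local mem_f=/proc/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"            # one array entry per meminfo line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then    # matched the requested field
            echo "$val"
            return 0
        fi
    done
    echo 0                               # fallback when the field is missing
}

get_meminfo MemTotal    # total RAM in kB on a Linux host
```

The traced script also appears to support a per-NUMA-node lookup (note the `local node=` and the `/sys/devices/system/node/node/meminfo` existence check above); the sketch omits that branch.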
00:03:12.528 03:54:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:12.528 03:54:26 -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.528 03:54:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.528 03:54:26 -- setup/common.sh@31 -- # IFS=': '
00:03:12.528 03:54:26 -- setup/common.sh@31 -- # read -r var val _
00:03:12.528 03:54:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 79527316 kB' 'MemAvailable: 82978512 kB' 'Buffers: 9460 kB' 'Cached: 8823448 kB' 'SwapCached: 0 kB' 'Active: 6162704 kB' 'Inactive: 3400272 kB' 'Active(anon): 5617852 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 733520 kB' 'Mapped: 144588 kB' 'Shmem: 4887784 kB' 'KReclaimable: 204104 kB' 'Slab: 606196 kB' 'SReclaimable: 204104 kB' 'SUnreclaim: 402092 kB' 'KernelStack: 22416 kB' 'PageTables: 7992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53997628 kB' 'Committed_AS: 7907048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214648 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 'DirectMap1G: 92274688 kB'
00:03:12.528 03:54:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:12.528 03:54:26 -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31/@32 compare-and-continue entries for every field from MemFree through HugePages_Rsvd, in /proc/meminfo order ...]
00:03:12.529 03:54:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:12.530 03:54:26 -- setup/common.sh@33 -- # echo 0
00:03:12.530 03:54:26 -- setup/common.sh@33 -- # return 0
00:03:12.530 03:54:26 -- setup/hugepages.sh@99 -- # surp=0
00:03:12.530 03:54:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:12.530 03:54:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:12.530 03:54:26 -- setup/common.sh@18 -- # local node=
00:03:12.530 03:54:26 -- setup/common.sh@19 -- # local var val
00:03:12.530 03:54:26 -- setup/common.sh@20 -- # local mem_f mem
00:03:12.530 03:54:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.530 03:54:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:12.530 03:54:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:12.530 03:54:26 -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.530 03:54:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.530 03:54:26 -- setup/common.sh@31 -- # IFS=': '
00:03:12.530 03:54:26 -- setup/common.sh@31 -- # read -r var val _
00:03:12.530 03:54:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 79528628 kB' 'MemAvailable: 82979824 kB' 'Buffers: 9460 kB' 'Cached: 8823456 kB' 'SwapCached: 0 kB' 'Active: 6162400 kB' 'Inactive: 3400272 kB' 'Active(anon): 5617548 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 733164 kB' 'Mapped: 144508 kB' 'Shmem: 4887792 kB' 'KReclaimable: 204104 kB' 'Slab: 606176 kB' 'SReclaimable: 204104 kB' 'SUnreclaim: 402072 kB' 'KernelStack: 22384 kB' 'PageTables: 7868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53997628 kB' 'Committed_AS: 7907064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214632 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 'DirectMap1G: 92274688 kB'
00:03:12.530 03:54:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:12.530 03:54:26 -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31/@32 compare-and-continue entries for every field from MemFree through AnonHugePages ...]
00:03:12.531 03:54:26 -- setup/common.sh@31 -- # IFS=': '
-- # read -r var val _ 00:03:12.531 03:54:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.531 03:54:26 -- setup/common.sh@32 -- # continue 00:03:12.531 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.531 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.531 03:54:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.531 03:54:26 -- setup/common.sh@32 -- # continue 00:03:12.531 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.531 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.531 03:54:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.531 03:54:26 -- setup/common.sh@32 -- # continue 00:03:12.531 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.531 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.531 03:54:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.531 03:54:26 -- setup/common.sh@32 -- # continue 00:03:12.531 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.531 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.531 03:54:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.531 03:54:26 -- setup/common.sh@32 -- # continue 00:03:12.531 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.531 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.531 03:54:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.531 03:54:26 -- setup/common.sh@32 -- # continue 00:03:12.531 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.531 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.531 03:54:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.531 03:54:26 -- setup/common.sh@32 -- # continue 00:03:12.531 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.531 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 
00:03:12.531 03:54:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.531 03:54:26 -- setup/common.sh@32 -- # continue 00:03:12.531 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.531 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.531 03:54:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.531 03:54:26 -- setup/common.sh@32 -- # continue 00:03:12.531 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.531 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.531 03:54:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.531 03:54:26 -- setup/common.sh@33 -- # echo 0 00:03:12.531 03:54:26 -- setup/common.sh@33 -- # return 0 00:03:12.531 03:54:26 -- setup/hugepages.sh@100 -- # resv=0 00:03:12.531 03:54:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:12.531 nr_hugepages=1024 00:03:12.531 03:54:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:12.531 resv_hugepages=0 00:03:12.531 03:54:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:12.531 surplus_hugepages=0 00:03:12.531 03:54:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:12.531 anon_hugepages=0 00:03:12.531 03:54:27 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:12.531 03:54:27 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:12.531 03:54:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:12.531 03:54:27 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:12.531 03:54:27 -- setup/common.sh@18 -- # local node= 00:03:12.531 03:54:27 -- setup/common.sh@19 -- # local var val 00:03:12.531 03:54:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.531 03:54:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.531 03:54:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.531 03:54:27 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.531 03:54:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.531 03:54:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.531 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.531 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.531 03:54:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 79530000 kB' 'MemAvailable: 82981196 kB' 'Buffers: 9460 kB' 'Cached: 8823472 kB' 'SwapCached: 0 kB' 'Active: 6162692 kB' 'Inactive: 3400272 kB' 'Active(anon): 5617840 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 733436 kB' 'Mapped: 144508 kB' 'Shmem: 4887808 kB' 'KReclaimable: 204104 kB' 'Slab: 606176 kB' 'SReclaimable: 204104 kB' 'SUnreclaim: 402072 kB' 'KernelStack: 22400 kB' 'PageTables: 7916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53997628 kB' 'Committed_AS: 7908224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214584 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 'DirectMap1G: 92274688 kB' 00:03:12.531 03:54:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.531 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.531 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.531 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.531 03:54:27 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.531 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.531 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.531 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.531 03:54:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.531 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.531 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.531 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.531 03:54:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.531 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.531 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.531 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.531 03:54:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.531 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.531 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.531 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.531 03:54:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.531 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.531 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.531 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.531 03:54:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.531 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.531 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.531 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.531 03:54:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.531 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:12.532 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.532 
03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.532 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.532 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.533 03:54:27 -- 
setup/common.sh@32 -- # continue 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.533 03:54:27 -- 
setup/common.sh@32 -- # continue 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.533 03:54:27 -- setup/common.sh@33 -- # echo 1024 00:03:12.533 03:54:27 -- setup/common.sh@33 -- # return 0 00:03:12.533 03:54:27 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:12.533 03:54:27 -- setup/hugepages.sh@112 -- # get_nodes 00:03:12.533 03:54:27 -- setup/hugepages.sh@27 -- # local node 00:03:12.533 03:54:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.533 03:54:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:12.533 03:54:27 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:12.533 03:54:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:12.533 03:54:27 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:12.533 03:54:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:12.533 03:54:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:12.533 03:54:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:12.533 03:54:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:12.533 03:54:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.533 03:54:27 -- setup/common.sh@18 -- # local node=0 00:03:12.533 03:54:27 -- setup/common.sh@19 -- # local var val 00:03:12.533 03:54:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.533 03:54:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.533 03:54:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:12.533 03:54:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:12.533 03:54:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.533 03:54:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32581956 kB' 'MemFree: 27646268 kB' 'MemUsed: 4935688 kB' 'SwapCached: 0 kB' 'Active: 2249632 kB' 'Inactive: 152144 kB' 'Active(anon): 1881884 kB' 'Inactive(anon): 0 kB' 'Active(file): 367748 kB' 'Inactive(file): 152144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1905408 kB' 'Mapped: 108592 kB' 'AnonPages: 499560 kB' 'Shmem: 1385516 kB' 'KernelStack: 12616 kB' 'PageTables: 4856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 136604 kB' 'Slab: 402008 kB' 'SReclaimable: 136604 kB' 'SUnreclaim: 265404 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # 
continue 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.533 03:54:27 
-- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.533 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.533 03:54:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 
00:03:12.534 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.534 03:54:27 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.534 03:54:27 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.534 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.534 03:54:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.534 03:54:27 -- setup/common.sh@33 -- # echo 0 00:03:12.534 03:54:27 -- setup/common.sh@33 -- # return 0 00:03:12.534 03:54:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:12.534 03:54:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:12.534 03:54:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:12.534 03:54:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:12.534 03:54:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.534 03:54:27 -- setup/common.sh@18 -- # local node=1 00:03:12.534 03:54:27 -- setup/common.sh@19 -- # local var val 00:03:12.795 03:54:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.795 03:54:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.795 03:54:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:12.795 03:54:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:12.795 03:54:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.795 03:54:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.795 03:54:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60733248 kB' 'MemFree: 51883996 kB' 'MemUsed: 8849252 kB' 'SwapCached: 0 kB' 'Active: 3913548 kB' 'Inactive: 3248128 kB' 'Active(anon): 3736444 kB' 'Inactive(anon): 0 kB' 'Active(file): 177104 kB' 'Inactive(file): 3248128 kB' 'Unevictable: 0 kB' 'Mlocked: 
0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6927540 kB' 'Mapped: 35932 kB' 'AnonPages: 234364 kB' 'Shmem: 3502308 kB' 'KernelStack: 9784 kB' 'PageTables: 3044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67500 kB' 'Slab: 204168 kB' 'SReclaimable: 67500 kB' 'SUnreclaim: 136668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.795 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.795 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:12.796 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.796 03:54:27 -- 
setup/common.sh@32 -- # continue 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.796 03:54:27 -- 
setup/common.sh@32 -- # continue 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # continue 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.796 03:54:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.796 03:54:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.796 03:54:27 -- setup/common.sh@33 -- # echo 0 00:03:12.796 03:54:27 -- setup/common.sh@33 -- # return 0 00:03:12.796 03:54:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:12.796 03:54:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:12.796 03:54:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:12.796 03:54:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:12.796 03:54:27 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:12.796 node0=512 expecting 512 00:03:12.796 03:54:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:12.796 03:54:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:12.796 03:54:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:12.796 03:54:27 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:12.796 node1=512 expecting 512 00:03:12.796 03:54:27 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:12.796 00:03:12.796 real 0m4.229s 00:03:12.796 user 0m1.580s 00:03:12.796 sys 0m2.647s 00:03:12.796 03:54:27 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:03:12.796 03:54:27 -- common/autotest_common.sh@10 -- # set +x 00:03:12.796 ************************************ 00:03:12.796 END TEST per_node_1G_alloc 00:03:12.796 ************************************ 00:03:12.796 03:54:27 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:12.796 03:54:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:12.796 03:54:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:12.796 03:54:27 -- common/autotest_common.sh@10 -- # set +x 00:03:12.796 ************************************ 00:03:12.796 START TEST even_2G_alloc 00:03:12.796 ************************************ 00:03:12.796 03:54:27 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:03:12.796 03:54:27 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:12.796 03:54:27 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:12.796 03:54:27 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:12.796 03:54:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:12.796 03:54:27 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:12.796 03:54:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:12.796 03:54:27 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:12.796 03:54:27 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:12.796 03:54:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:12.796 03:54:27 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:12.796 03:54:27 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:12.796 03:54:27 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:12.796 03:54:27 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:12.796 03:54:27 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:12.796 03:54:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:12.796 03:54:27 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:12.796 03:54:27 -- setup/hugepages.sh@83 -- # : 512 00:03:12.796 03:54:27 -- 
setup/hugepages.sh@84 -- # : 1 00:03:12.796 03:54:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:12.796 03:54:27 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:12.796 03:54:27 -- setup/hugepages.sh@83 -- # : 0 00:03:12.796 03:54:27 -- setup/hugepages.sh@84 -- # : 0 00:03:12.796 03:54:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:12.796 03:54:27 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:12.796 03:54:27 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:12.796 03:54:27 -- setup/hugepages.sh@153 -- # setup output 00:03:12.796 03:54:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.797 03:54:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:16.090 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:16.090 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:16.090 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:16.090 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:16.090 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:16.090 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:16.090 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:16.090 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:16.090 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:16.090 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:16.090 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:16.090 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:16.090 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:16.090 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:16.090 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:16.090 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:16.090 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 
00:03:17.035 03:54:31 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:17.035 03:54:31 -- setup/hugepages.sh@89 -- # local node 00:03:17.035 03:54:31 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:17.035 03:54:31 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:17.035 03:54:31 -- setup/hugepages.sh@92 -- # local surp 00:03:17.035 03:54:31 -- setup/hugepages.sh@93 -- # local resv 00:03:17.035 03:54:31 -- setup/hugepages.sh@94 -- # local anon 00:03:17.035 03:54:31 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:17.035 03:54:31 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:17.035 03:54:31 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:17.035 03:54:31 -- setup/common.sh@18 -- # local node= 00:03:17.035 03:54:31 -- setup/common.sh@19 -- # local var val 00:03:17.035 03:54:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:17.035 03:54:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.035 03:54:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.035 03:54:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.035 03:54:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.035 03:54:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 79519184 kB' 'MemAvailable: 82970376 kB' 'Buffers: 9460 kB' 'Cached: 8823864 kB' 'SwapCached: 0 kB' 'Active: 6168028 kB' 'Inactive: 3400272 kB' 'Active(anon): 5623176 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 738488 kB' 'Mapped: 144676 kB' 'Shmem: 4888200 kB' 'KReclaimable: 204096 kB' 'Slab: 606836 kB' 
'SReclaimable: 204096 kB' 'SUnreclaim: 402740 kB' 'KernelStack: 22368 kB' 'PageTables: 7856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53997628 kB' 'Committed_AS: 7908264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214440 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 'DirectMap1G: 92274688 kB' 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 
03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- 
# [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.035 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.035 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:17.036 03:54:31 -- setup/common.sh@33 -- # echo 0 00:03:17.036 03:54:31 -- setup/common.sh@33 -- # return 0 00:03:17.036 03:54:31 -- setup/hugepages.sh@97 -- # anon=0 00:03:17.036 03:54:31 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:17.036 03:54:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.036 03:54:31 -- setup/common.sh@18 -- # local node= 00:03:17.036 03:54:31 -- setup/common.sh@19 -- # local var val 00:03:17.036 03:54:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:17.036 03:54:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.036 03:54:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.036 03:54:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.036 03:54:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.036 03:54:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 79527592 kB' 'MemAvailable: 82978784 kB' 'Buffers: 9460 kB' 'Cached: 8823868 kB' 'SwapCached: 0 kB' 'Active: 6167396 kB' 'Inactive: 3400272 kB' 'Active(anon): 5622544 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 737848 kB' 'Mapped: 144656 kB' 'Shmem: 4888204 kB' 'KReclaimable: 204096 kB' 'Slab: 606812 kB' 'SReclaimable: 204096 kB' 'SUnreclaim: 402716 kB' 'KernelStack: 22352 kB' 'PageTables: 7772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53997628 kB' 'Committed_AS: 7908276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214440 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 'DirectMap1G: 92274688 kB' 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.036 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.036 03:54:31 -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.036 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.037 03:54:31 -- setup/common.sh@33 -- # echo 0 00:03:17.037 03:54:31 -- setup/common.sh@33 -- # return 0 00:03:17.037 03:54:31 -- setup/hugepages.sh@99 -- # surp=0 00:03:17.037 03:54:31 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:17.037 03:54:31 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:17.037 03:54:31 -- setup/common.sh@18 -- # local node= 00:03:17.037 03:54:31 -- setup/common.sh@19 -- # local var val 00:03:17.037 03:54:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:17.037 03:54:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.037 03:54:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.037 03:54:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.037 03:54:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.037 03:54:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.037 03:54:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 79529280 kB' 'MemAvailable: 82980472 kB' 'Buffers: 9460 kB' 'Cached: 8823880 kB' 'SwapCached: 0 kB' 'Active: 6167636 kB' 
'Inactive: 3400272 kB' 'Active(anon): 5622784 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 738024 kB' 'Mapped: 144596 kB' 'Shmem: 4888216 kB' 'KReclaimable: 204096 kB' 'Slab: 606740 kB' 'SReclaimable: 204096 kB' 'SUnreclaim: 402644 kB' 'KernelStack: 22384 kB' 'PageTables: 7892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53997628 kB' 'Committed_AS: 7909440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214424 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 'DirectMap1G: 92274688 kB' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.037 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.037 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.038 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.038 03:54:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.039 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.039 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.039 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.039 03:54:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.039 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.039 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.039 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.039 03:54:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.039 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.039 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.039 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.039 03:54:31 -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.039 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.039 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.039 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.039 03:54:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.039 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.039 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.039 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.039 03:54:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.039 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.039 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.039 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.039 03:54:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.039 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.039 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.039 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.039 03:54:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.039 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.039 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.039 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.039 03:54:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.039 03:54:31 -- setup/common.sh@33 -- # echo 0 00:03:17.039 03:54:31 -- setup/common.sh@33 -- # return 0 00:03:17.039 03:54:31 -- setup/hugepages.sh@100 -- # resv=0 00:03:17.039 03:54:31 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:17.039 nr_hugepages=1024 00:03:17.039 03:54:31 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:17.039 resv_hugepages=0 00:03:17.039 03:54:31 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:17.039 surplus_hugepages=0 00:03:17.039 03:54:31 -- setup/hugepages.sh@105 -- # 
echo anon_hugepages=0 00:03:17.039 anon_hugepages=0 00:03:17.039 03:54:31 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:17.039 03:54:31 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:17.039 03:54:31 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:17.039 03:54:31 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:17.039 03:54:31 -- setup/common.sh@18 -- # local node= 00:03:17.039 03:54:31 -- setup/common.sh@19 -- # local var val 00:03:17.039 03:54:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:17.039 03:54:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.039 03:54:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.039 03:54:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.039 03:54:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.039 03:54:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.039 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.039 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.039 03:54:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 79529040 kB' 'MemAvailable: 82980232 kB' 'Buffers: 9460 kB' 'Cached: 8823892 kB' 'SwapCached: 0 kB' 'Active: 6167376 kB' 'Inactive: 3400272 kB' 'Active(anon): 5622524 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 737728 kB' 'Mapped: 144596 kB' 'Shmem: 4888228 kB' 'KReclaimable: 204096 kB' 'Slab: 606736 kB' 'SReclaimable: 204096 kB' 'SUnreclaim: 402640 kB' 'KernelStack: 22400 kB' 'PageTables: 7576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53997628 kB' 'Committed_AS: 7909452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214408 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 'DirectMap1G: 92274688 kB' 00:03:17.039 03:54:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.039 03:54:31 -- setup/common.sh@32 -- # continue [xtrace of the per-key scan over the remaining /proc/meminfo fields elided] 00:03:17.040 03:54:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.040 03:54:31 -- setup/common.sh@33 -- # echo 1024 00:03:17.040 03:54:31 -- setup/common.sh@33 -- # return 0 00:03:17.040 03:54:31 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:17.040 03:54:31 -- setup/hugepages.sh@112 -- # get_nodes 00:03:17.040 03:54:31 -- setup/hugepages.sh@27 -- # local node 00:03:17.040 03:54:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:17.040 03:54:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:17.040 03:54:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:17.040 03:54:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:17.040 03:54:31 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:17.040 03:54:31 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:17.040 03:54:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:17.040 03:54:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:17.040 03:54:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:17.040 03:54:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.040 03:54:31 -- setup/common.sh@18 -- # local node=0 00:03:17.040 03:54:31 -- setup/common.sh@19 -- # local var val 00:03:17.040 03:54:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:17.040 03:54:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.040 03:54:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:17.040 03:54:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:17.040 03:54:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.040 03:54:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.040 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.040 03:54:31 --
setup/common.sh@31 -- # read -r var val _ 00:03:17.041 03:54:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32581956 kB' 'MemFree: 27648268 kB' 'MemUsed: 4933688 kB' 'SwapCached: 0 kB' 'Active: 2254116 kB' 'Inactive: 152144 kB' 'Active(anon): 1886368 kB' 'Inactive(anon): 0 kB' 'Active(file): 367748 kB' 'Inactive(file): 152144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1905716 kB' 'Mapped: 108592 kB' 'AnonPages: 503852 kB' 'Shmem: 1385824 kB' 'KernelStack: 12856 kB' 'PageTables: 5208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 136604 kB' 'Slab: 402252 kB' 'SReclaimable: 136604 kB' 'SUnreclaim: 265648 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:17.041 03:54:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.041 03:54:31 -- setup/common.sh@32 -- # continue [xtrace of the per-key scan over the remaining node0 meminfo fields elided] 00:03:17.041 03:54:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.041 03:54:31 -- setup/common.sh@33 -- # echo 0 00:03:17.041 03:54:31 -- setup/common.sh@33 -- # return 0 00:03:17.041 03:54:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:17.041 03:54:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:17.041 03:54:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:17.041 03:54:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:17.041 03:54:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.042 03:54:31 -- setup/common.sh@18 -- # local node=1 00:03:17.042 03:54:31 -- setup/common.sh@19 -- # local var val 00:03:17.042 03:54:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:17.042 03:54:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.042 03:54:31 -- setup/common.sh@23 -- # [[
-e /sys/devices/system/node/node1/meminfo ]] 00:03:17.042 03:54:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:17.042 03:54:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.042 03:54:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.042 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.042 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.042 03:54:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60733248 kB' 'MemFree: 51879488 kB' 'MemUsed: 8853760 kB' 'SwapCached: 0 kB' 'Active: 3913652 kB' 'Inactive: 3248128 kB' 'Active(anon): 3736548 kB' 'Inactive(anon): 0 kB' 'Active(file): 177104 kB' 'Inactive(file): 3248128 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6927664 kB' 'Mapped: 36004 kB' 'AnonPages: 234192 kB' 'Shmem: 3502432 kB' 'KernelStack: 9736 kB' 'PageTables: 2844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67492 kB' 'Slab: 204480 kB' 'SReclaimable: 67492 kB' 'SUnreclaim: 136988 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:17.042 03:54:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.042 03:54:31 -- setup/common.sh@32 -- # continue [xtrace of the per-key scan over the remaining node1 meminfo fields elided] 00:03:17.042 03:54:31 -- setup/common.sh@31 --
# read -r var val _ 00:03:17.042 03:54:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.042 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.042 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.042 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.042 03:54:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.042 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.042 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.042 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.042 03:54:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.042 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.042 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.042 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.042 03:54:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.042 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.042 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.042 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.042 03:54:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.042 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.042 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.042 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.042 03:54:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.042 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.042 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.042 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.042 03:54:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.042 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.042 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.042 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 
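The xtrace above is setup/common.sh's meminfo lookup: it reads `/proc/meminfo` line by line with `IFS=': '`, `continue`s past every field that is not the requested key (hence the long run of `[[ Field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]` checks), then echoes the matching value. A condensed, hypothetical sketch of that loop — the function name and sample file below are illustrative, not the verbatim SPDK source:

```shell
#!/usr/bin/env bash
# Sketch of the meminfo lookup traced above: split each "Key: value kB"
# line on ':' and ' ', skip non-matching keys (the repeated `continue`
# lines in the log), and echo the value of the requested key.
get_meminfo_sketch() {
  local get=$1 file=$2 var val _
  while IFS=': ' read -r var val _; do
    # a non-matching field produces one [[ ... ]] / continue pair in the trace
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done < "$file"
  echo 0   # key absent: report 0, mirroring the trace's `echo 0`
}

# Illustrative stand-in for /proc/meminfo
sample=$(mktemp)
cat > "$sample" <<'EOF'
MemTotal: 93315204 kB
HugePages_Total: 1025
HugePages_Surp: 0
EOF

get_meminfo_sketch HugePages_Surp "$sample"   # prints 0
```

In the log, the final matching line (`HugePages_Surp == HugePages_Surp`) is followed by `echo 0` and `return 0`, which is exactly the value fed into `(( nodes_test[node] += 0 ))`.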
00:03:17.042 03:54:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.042 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.042 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.042 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.042 03:54:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.042 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.042 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.042 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.042 03:54:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.042 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.042 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.042 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.042 03:54:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.042 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.042 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.042 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.042 03:54:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.042 03:54:31 -- setup/common.sh@32 -- # continue 00:03:17.042 03:54:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:17.042 03:54:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:17.043 03:54:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.043 03:54:31 -- setup/common.sh@33 -- # echo 0 00:03:17.043 03:54:31 -- setup/common.sh@33 -- # return 0 00:03:17.043 03:54:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:17.043 03:54:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:17.043 03:54:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:17.043 03:54:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:17.043 03:54:31 -- 
setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:17.043 node0=512 expecting 512 00:03:17.043 03:54:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:17.043 03:54:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:17.043 03:54:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:17.043 03:54:31 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:17.043 node1=512 expecting 512 00:03:17.043 03:54:31 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:17.043 00:03:17.043 real 0m4.204s 00:03:17.043 user 0m1.716s 00:03:17.043 sys 0m2.531s 00:03:17.043 03:54:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:17.043 03:54:31 -- common/autotest_common.sh@10 -- # set +x 00:03:17.043 ************************************ 00:03:17.043 END TEST even_2G_alloc 00:03:17.043 ************************************ 00:03:17.043 03:54:31 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:17.043 03:54:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:17.043 03:54:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:17.043 03:54:31 -- common/autotest_common.sh@10 -- # set +x 00:03:17.302 ************************************ 00:03:17.302 START TEST odd_alloc 00:03:17.302 ************************************ 00:03:17.302 03:54:31 -- common/autotest_common.sh@1111 -- # odd_alloc 00:03:17.302 03:54:31 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:17.302 03:54:31 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:17.302 03:54:31 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:17.302 03:54:31 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:17.302 03:54:31 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:17.302 03:54:31 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:17.302 03:54:31 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:17.302 03:54:31 -- setup/hugepages.sh@62 -- # local 
user_nodes 00:03:17.302 03:54:31 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:17.302 03:54:31 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:17.302 03:54:31 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:17.302 03:54:31 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:17.302 03:54:31 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:17.302 03:54:31 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:17.302 03:54:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:17.302 03:54:31 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:17.302 03:54:31 -- setup/hugepages.sh@83 -- # : 513 00:03:17.302 03:54:31 -- setup/hugepages.sh@84 -- # : 1 00:03:17.302 03:54:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:17.302 03:54:31 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:17.302 03:54:31 -- setup/hugepages.sh@83 -- # : 0 00:03:17.302 03:54:31 -- setup/hugepages.sh@84 -- # : 0 00:03:17.302 03:54:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:17.302 03:54:31 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:17.302 03:54:31 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:17.302 03:54:31 -- setup/hugepages.sh@160 -- # setup output 00:03:17.302 03:54:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:17.302 03:54:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:19.842 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:19.842 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:19.842 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:19.842 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:19.842 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:19.842 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:19.842 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:19.842 0000:00:04.0 (8086 2021): Already using the vfio-pci 
driver 00:03:19.842 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:19.842 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:19.842 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:19.842 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:19.842 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:19.842 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:19.842 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:19.842 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:19.842 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:21.227 03:54:35 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:21.227 03:54:35 -- setup/hugepages.sh@89 -- # local node 00:03:21.227 03:54:35 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:21.227 03:54:35 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:21.227 03:54:35 -- setup/hugepages.sh@92 -- # local surp 00:03:21.227 03:54:35 -- setup/hugepages.sh@93 -- # local resv 00:03:21.227 03:54:35 -- setup/hugepages.sh@94 -- # local anon 00:03:21.227 03:54:35 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:21.228 03:54:35 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:21.228 03:54:35 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:21.228 03:54:35 -- setup/common.sh@18 -- # local node= 00:03:21.228 03:54:35 -- setup/common.sh@19 -- # local var val 00:03:21.228 03:54:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.228 03:54:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.228 03:54:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.228 03:54:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.228 03:54:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.228 03:54:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 
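The odd_alloc prologue above (`get_test_nr_hugepages 2098176` → `nr_hugepages=1025` over `_no_nodes=2`) fills `nodes_test` back to front: each pass at hugepages.sh@81-84 assigns the integer quotient to the highest remaining node and leaves the remainder for the lower ones, yielding node1=512 and node0=513, while even_2G_alloc's 1024 pages split 512/512. A hedged sketch of the arithmetic implied by that trace (the function name is mine, not SPDK's):

```shell
#!/usr/bin/env bash
# Back-to-front per-node split implied by the hugepages.sh@81-84 trace:
# node (nodes-1) gets total/nodes pages, then the pool and node count shrink.
split_hugepages() {
  local total=$1 nodes=$2
  local -a out
  while (( nodes > 0 )); do
    out[nodes - 1]=$(( total / nodes ))
    total=$(( total - out[nodes - 1] ))
    nodes=$(( nodes - 1 ))
  done
  echo "${out[@]}"
}

split_hugepages 1025 2   # odd_alloc:      prints "513 512"
split_hugepages 1024 2   # even_2G_alloc:  prints "512 512"
```

This is why the verification step later expects an odd count to land as 513 on node0 with 512 on node1, and why `HUGEMEM=2049` (2049 MB / 2 MB pages = 1025 pages, the 2098176 kB above).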
00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 79538876 kB' 'MemAvailable: 82990164 kB' 'Buffers: 9460 kB' 'Cached: 8824280 kB' 'SwapCached: 0 kB' 'Active: 6173620 kB' 'Inactive: 3400272 kB' 'Active(anon): 5628768 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 743592 kB' 'Mapped: 144672 kB' 'Shmem: 4888616 kB' 'KReclaimable: 204288 kB' 'Slab: 606300 kB' 'SReclaimable: 204288 kB' 'SUnreclaim: 402012 kB' 'KernelStack: 22432 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996604 kB' 'Committed_AS: 7909348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214696 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 'DirectMap1G: 92274688 kB' 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ 
MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # 
continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.228 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.228 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.229 03:54:35 -- setup/common.sh@33 -- # echo 0 00:03:21.229 03:54:35 -- setup/common.sh@33 -- # return 0 00:03:21.229 03:54:35 -- setup/hugepages.sh@97 -- # anon=0 00:03:21.229 03:54:35 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:21.229 03:54:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.229 03:54:35 -- setup/common.sh@18 -- # local node= 00:03:21.229 03:54:35 -- setup/common.sh@19 -- # local var val 00:03:21.229 03:54:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.229 03:54:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.229 03:54:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.229 03:54:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.229 03:54:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.229 03:54:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 79540152 kB' 'MemAvailable: 82991440 kB' 'Buffers: 9460 kB' 'Cached: 8824284 kB' 'SwapCached: 0 kB' 'Active: 6174172 kB' 'Inactive: 3400272 kB' 'Active(anon): 5629320 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 
'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 744188 kB' 'Mapped: 144644 kB' 'Shmem: 4888620 kB' 'KReclaimable: 204288 kB' 'Slab: 606244 kB' 'SReclaimable: 204288 kB' 'SUnreclaim: 401956 kB' 'KernelStack: 22432 kB' 'PageTables: 8024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996604 kB' 'Committed_AS: 7910508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214648 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 'DirectMap1G: 92274688 kB' 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # 
continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 
00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.229 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.229 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 
00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.230 03:54:35 -- setup/common.sh@33 -- # echo 0 00:03:21.230 03:54:35 -- setup/common.sh@33 -- # return 0 00:03:21.230 03:54:35 -- setup/hugepages.sh@99 -- # surp=0 00:03:21.230 03:54:35 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:21.230 03:54:35 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:21.230 03:54:35 -- setup/common.sh@18 -- # local node= 00:03:21.230 03:54:35 -- setup/common.sh@19 -- # local var val 00:03:21.230 03:54:35 -- setup/common.sh@20 -- # local 
mem_f mem 00:03:21.230 03:54:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.230 03:54:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.230 03:54:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.230 03:54:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.230 03:54:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 79542396 kB' 'MemAvailable: 82993684 kB' 'Buffers: 9460 kB' 'Cached: 8824296 kB' 'SwapCached: 0 kB' 'Active: 6173504 kB' 'Inactive: 3400272 kB' 'Active(anon): 5628652 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 743540 kB' 'Mapped: 144660 kB' 'Shmem: 4888632 kB' 'KReclaimable: 204288 kB' 'Slab: 606336 kB' 'SReclaimable: 204288 kB' 'SUnreclaim: 402048 kB' 'KernelStack: 22352 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996604 kB' 'Committed_AS: 7910684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214648 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 'DirectMap1G: 92274688 kB' 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 
00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.230 03:54:35 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:21.230 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.230 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 
03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # 
[[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.231 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.231 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.232 03:54:35 -- setup/common.sh@33 -- # echo 0 00:03:21.232 03:54:35 -- setup/common.sh@33 -- # return 0 00:03:21.232 03:54:35 -- setup/hugepages.sh@100 -- # resv=0 00:03:21.232 03:54:35 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:21.232 nr_hugepages=1025 00:03:21.232 03:54:35 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:21.232 resv_hugepages=0 00:03:21.232 03:54:35 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:21.232 surplus_hugepages=0 00:03:21.232 03:54:35 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:21.232 anon_hugepages=0 00:03:21.232 03:54:35 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:21.232 03:54:35 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:21.232 03:54:35 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:21.232 03:54:35 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:21.232 03:54:35 -- setup/common.sh@18 -- # local node= 00:03:21.232 03:54:35 -- setup/common.sh@19 -- # local var val 00:03:21.232 03:54:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.232 03:54:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.232 03:54:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.232 03:54:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.232 03:54:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.232 03:54:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.232 03:54:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 79541856 kB' 'MemAvailable: 82993144 kB' 'Buffers: 9460 kB' 'Cached: 8824308 kB' 'SwapCached: 0 kB' 'Active: 6173948 kB' 'Inactive: 3400272 kB' 'Active(anon): 5629096 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 
kB' 'Writeback: 0 kB' 'AnonPages: 743908 kB' 'Mapped: 144660 kB' 'Shmem: 4888644 kB' 'KReclaimable: 204288 kB' 'Slab: 606336 kB' 'SReclaimable: 204288 kB' 'SUnreclaim: 402048 kB' 'KernelStack: 22496 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53996604 kB' 'Committed_AS: 7912056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214728 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 'DirectMap1G: 92274688 kB' 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.232 
03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.232 03:54:35 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.232 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.232 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 
-- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.233 03:54:35 -- setup/common.sh@33 -- # echo 1025 00:03:21.233 03:54:35 -- setup/common.sh@33 -- # return 0 00:03:21.233 03:54:35 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:21.233 03:54:35 -- setup/hugepages.sh@112 -- # get_nodes 00:03:21.233 03:54:35 -- setup/hugepages.sh@27 -- # local node 00:03:21.233 03:54:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.233 03:54:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:21.233 03:54:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.233 03:54:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:21.233 03:54:35 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:21.233 03:54:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:21.233 03:54:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.233 03:54:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.233 03:54:35 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:21.233 03:54:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.233 03:54:35 -- setup/common.sh@18 -- # local node=0 00:03:21.233 03:54:35 -- setup/common.sh@19 -- # local var 
val 00:03:21.233 03:54:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.233 03:54:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.233 03:54:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:21.233 03:54:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:21.233 03:54:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.233 03:54:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32581956 kB' 'MemFree: 27637376 kB' 'MemUsed: 4944580 kB' 'SwapCached: 0 kB' 'Active: 2259136 kB' 'Inactive: 152144 kB' 'Active(anon): 1891388 kB' 'Inactive(anon): 0 kB' 'Active(file): 367748 kB' 'Inactive(file): 152144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1906000 kB' 'Mapped: 108592 kB' 'AnonPages: 508548 kB' 'Shmem: 1386108 kB' 'KernelStack: 12744 kB' 'PageTables: 5160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 136796 kB' 'Slab: 402108 kB' 'SReclaimable: 136796 kB' 'SUnreclaim: 265312 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.233 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.233 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@33 -- # echo 0 00:03:21.234 03:54:35 -- setup/common.sh@33 -- # return 0 00:03:21.234 03:54:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.234 03:54:35 -- setup/hugepages.sh@115 -- # for node in 
"${!nodes_test[@]}" 00:03:21.234 03:54:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.234 03:54:35 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:21.234 03:54:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.234 03:54:35 -- setup/common.sh@18 -- # local node=1 00:03:21.234 03:54:35 -- setup/common.sh@19 -- # local var val 00:03:21.234 03:54:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.234 03:54:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.234 03:54:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:21.234 03:54:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:21.234 03:54:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.234 03:54:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60733248 kB' 'MemFree: 51902432 kB' 'MemUsed: 8830816 kB' 'SwapCached: 0 kB' 'Active: 3914640 kB' 'Inactive: 3248128 kB' 'Active(anon): 3737536 kB' 'Inactive(anon): 0 kB' 'Active(file): 177104 kB' 'Inactive(file): 3248128 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6927780 kB' 'Mapped: 36068 kB' 'AnonPages: 235076 kB' 'Shmem: 3502548 kB' 'KernelStack: 9912 kB' 'PageTables: 2880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67492 kB' 'Slab: 204228 kB' 'SReclaimable: 67492 kB' 'SUnreclaim: 136736 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # 
continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.234 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.234 03:54:35 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.235 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.235 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.235 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.235 03:54:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.235 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.235 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.235 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.235 03:54:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.235 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.235 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.235 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.235 03:54:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.235 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.235 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 
00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.496 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 
00:03:21.496 03:54:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.496 03:54:35 -- setup/common.sh@33 -- # echo 0 00:03:21.496 03:54:35 -- setup/common.sh@33 -- # return 0 00:03:21.496 03:54:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.496 03:54:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.496 03:54:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.496 03:54:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.496 03:54:35 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:21.497 node0=512 expecting 513 00:03:21.497 03:54:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.497 03:54:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.497 03:54:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.497 03:54:35 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:21.497 node1=513 expecting 512 00:03:21.497 03:54:35 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:21.497 00:03:21.497 real 0m4.154s 00:03:21.497 user 0m1.667s 00:03:21.497 sys 0m2.520s 00:03:21.497 03:54:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:21.497 03:54:35 -- common/autotest_common.sh@10 -- # set +x 00:03:21.497 ************************************ 00:03:21.497 END TEST odd_alloc 00:03:21.497 ************************************ 00:03:21.497 03:54:35 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:21.497 03:54:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:21.497 03:54:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:21.497 03:54:35 -- common/autotest_common.sh@10 -- # set +x 00:03:21.497 ************************************ 00:03:21.497 START TEST custom_alloc 00:03:21.497 ************************************ 00:03:21.497 03:54:35 -- common/autotest_common.sh@1111 -- # 
custom_alloc 00:03:21.497 03:54:35 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:21.497 03:54:35 -- setup/hugepages.sh@169 -- # local node 00:03:21.497 03:54:35 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:21.497 03:54:35 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:21.497 03:54:35 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:21.497 03:54:35 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:21.497 03:54:35 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:21.497 03:54:35 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:21.497 03:54:35 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:21.497 03:54:35 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:21.497 03:54:35 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:21.497 03:54:35 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:21.497 03:54:35 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:21.497 03:54:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:21.497 03:54:35 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:21.497 03:54:35 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:21.497 03:54:35 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:21.497 03:54:35 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:21.497 03:54:35 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:21.497 03:54:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.497 03:54:35 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:21.497 03:54:35 -- setup/hugepages.sh@83 -- # : 256 00:03:21.497 03:54:35 -- setup/hugepages.sh@84 -- # : 1 00:03:21.497 03:54:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.497 03:54:35 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:21.497 03:54:35 -- setup/hugepages.sh@83 -- # : 0 00:03:21.497 03:54:35 -- setup/hugepages.sh@84 -- # : 0 00:03:21.497 03:54:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.497 03:54:35 -- 
setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:21.497 03:54:35 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:21.497 03:54:35 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:21.497 03:54:35 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:21.497 03:54:35 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:21.497 03:54:35 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:21.497 03:54:35 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:21.497 03:54:35 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:21.497 03:54:35 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:21.497 03:54:35 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:21.497 03:54:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:21.497 03:54:35 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:21.497 03:54:35 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:21.497 03:54:35 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:21.497 03:54:35 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:21.497 03:54:35 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:21.497 03:54:35 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:21.497 03:54:35 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:21.497 03:54:35 -- setup/hugepages.sh@78 -- # return 0 00:03:21.497 03:54:35 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:21.497 03:54:35 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:21.497 03:54:35 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:21.497 03:54:35 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:21.497 03:54:35 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:21.497 03:54:35 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:21.497 03:54:35 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:21.497 03:54:35 -- 
setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:21.497 03:54:35 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:21.497 03:54:35 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:21.497 03:54:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:21.497 03:54:35 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:21.497 03:54:35 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:21.497 03:54:35 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:21.497 03:54:35 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:21.497 03:54:35 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:21.497 03:54:35 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:21.497 03:54:35 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:21.497 03:54:35 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:21.497 03:54:35 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:21.497 03:54:35 -- setup/hugepages.sh@78 -- # return 0 00:03:21.497 03:54:35 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:21.497 03:54:35 -- setup/hugepages.sh@187 -- # setup output 00:03:21.497 03:54:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.497 03:54:35 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:24.794 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:24.794 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:24.794 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:24.794 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:24.794 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:24.794 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:24.794 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:24.794 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:24.794 0000:80:04.7 (8086 2021): Already using the vfio-pci 
driver 00:03:24.794 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:24.794 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:24.794 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:24.794 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:24.794 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:24.794 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:24.794 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:24.794 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:25.739 03:54:39 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:25.739 03:54:39 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:25.739 03:54:39 -- setup/hugepages.sh@89 -- # local node 00:03:25.739 03:54:39 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:25.739 03:54:39 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:25.739 03:54:39 -- setup/hugepages.sh@92 -- # local surp 00:03:25.739 03:54:39 -- setup/hugepages.sh@93 -- # local resv 00:03:25.739 03:54:39 -- setup/hugepages.sh@94 -- # local anon 00:03:25.739 03:54:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:25.739 03:54:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:25.739 03:54:39 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:25.739 03:54:39 -- setup/common.sh@18 -- # local node= 00:03:25.739 03:54:39 -- setup/common.sh@19 -- # local var val 00:03:25.740 03:54:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.740 03:54:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.740 03:54:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.740 03:54:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.740 03:54:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.740 03:54:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 
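The trace just above (setup/hugepages.sh@181-187) builds `HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'` by collecting one `nodes_hp[<node>]=<pages>` entry per NUMA node and comma-joining them, while `_nr_hugepages` accumulates the total of 1536. A minimal standalone sketch of that join, assuming plain bash (the `joined` variable name is mine, not from the script):

```shell
#!/usr/bin/env bash
# Sketch of the HUGENODE construction seen in the trace: one entry per
# NUMA node, then a comma join done under a local IFS.
nodes_hp=([0]=512 [1]=1024)   # per-node hugepage request, as in the log
HUGENODE=()
_nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( _nr_hugepages += nodes_hp[node] ))
done
# Join with ',' in a subshell so the global IFS is untouched.
joined=$(IFS=,; echo "${HUGENODE[*]}")
echo "$joined"          # nodes_hp[0]=512,nodes_hp[1]=1024
echo "$_nr_hugepages"   # 1536
```

This matches the `nr_hugepages=1536` the setup script verifies later in the trace.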
00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 78446912 kB' 'MemAvailable: 81898180 kB' 'Buffers: 9460 kB' 'Cached: 8824432 kB' 'SwapCached: 0 kB' 'Active: 6178836 kB' 'Inactive: 3400272 kB' 'Active(anon): 5633984 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 748808 kB' 'Mapped: 144748 kB' 'Shmem: 4888768 kB' 'KReclaimable: 204248 kB' 'Slab: 606948 kB' 'SReclaimable: 204248 kB' 'SUnreclaim: 402700 kB' 'KernelStack: 22400 kB' 'PageTables: 7948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53473340 kB' 'Committed_AS: 7911628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214680 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 'DirectMap1G: 92274688 kB' 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ 
MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # 
continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.740 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.740 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.741 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.741 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.741 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:39 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:25.741 03:54:39 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.741 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.741 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.741 03:54:39 -- setup/common.sh@33 -- # echo 0 00:03:25.741 03:54:39 -- setup/common.sh@33 -- # return 0 00:03:25.741 03:54:39 -- setup/hugepages.sh@97 -- # anon=0 00:03:25.741 03:54:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:25.741 03:54:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.741 03:54:39 -- setup/common.sh@18 -- # local node= 00:03:25.741 03:54:39 -- setup/common.sh@19 -- # local var val 00:03:25.741 03:54:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.741 03:54:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.741 03:54:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.741 03:54:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.741 03:54:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.741 03:54:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.741 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 78447724 kB' 'MemAvailable: 81898992 kB' 'Buffers: 9460 kB' 'Cached: 8824436 kB' 'SwapCached: 0 kB' 'Active: 6178264 kB' 'Inactive: 3400272 kB' 'Active(anon): 5633412 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 
'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 748196 kB' 'Mapped: 144728 kB' 'Shmem: 4888772 kB' 'KReclaimable: 204248 kB' 'Slab: 606940 kB' 'SReclaimable: 204248 kB' 'SUnreclaim: 402692 kB' 'KernelStack: 22464 kB' 'PageTables: 7792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53473340 kB' 'Committed_AS: 7911476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214696 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 'DirectMap1G: 92274688 kB' 00:03:25.741 03:54:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:39 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:39 -- setup/common.sh@32 -- # 
continue 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:40 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 
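The long `[[ ... ]] / continue` runs surrounding this point are `get_meminfo` in setup/common.sh scanning `/proc/meminfo` one field at a time with `IFS=': ' read -r var val _`, skipping every key that is not the one requested (here `HugePages_Surp`). A self-contained sketch of that parsing pattern, assuming a Linux `/proc/meminfo` layout (the function name `get_meminfo_field` is mine):

```shell
#!/usr/bin/env bash
# Minimal sketch of the meminfo scan visible in the trace: split each
# "Key: value [kB]" line on ':' and whitespace, print the value for the
# requested key, and "continue" past everything else.
get_meminfo_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the trace's repeated branch
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_field MemTotal   # prints total memory in kB
```

The real `get_meminfo` additionally echoes the value and `return 0`s from inside the loop, which is the `@33 -- # echo 0 / return 0` pair seen whenever a key matches.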
00:03:25.741 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.741 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 
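Each scan in the trace first tests `[[ -e /sys/devices/system/node/node/meminfo ]]`; with `node=` empty that fails and the global `/proc/meminfo` is used, while a per-node query would read the `/sys` file, whose lines carry a `Node <n> ` prefix (stripped in the script by `mem=("${mem[@]#Node +([0-9]) }")`). A hedged sketch of that fallback, assuming Linux sysfs layout (the helper name `node_meminfo` is mine):

```shell
#!/usr/bin/env bash
# Sketch of the global-vs-per-node meminfo selection seen in the trace.
# Per-node files prefix each line with "Node <n> "; the regex makes that
# prefix optional so one pattern covers both file formats.
node_meminfo() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    sed -nE "s/^(Node $node )?$get:[[:space:]]*([0-9]+).*/\2/p" "$mem_f"
}

node_meminfo HugePages_Total      # global hugepage count
node_meminfo HugePages_Total 0    # node 0, if /sys exposes it
```

This is an equivalent shortcut, not the script's method: setup/common.sh reads the whole file into an array and scans it field by field, as the surrounding trace shows.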
00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.742 03:54:40 -- setup/common.sh@33 -- # echo 0 00:03:25.742 03:54:40 -- setup/common.sh@33 -- # return 0 00:03:25.742 03:54:40 -- setup/hugepages.sh@99 -- # surp=0 00:03:25.742 03:54:40 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:25.742 03:54:40 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:25.742 03:54:40 -- setup/common.sh@18 -- # local node= 00:03:25.742 03:54:40 -- setup/common.sh@19 -- # local var val 00:03:25.742 03:54:40 -- setup/common.sh@20 -- # local 
mem_f mem 00:03:25.742 03:54:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.742 03:54:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.742 03:54:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.742 03:54:40 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.742 03:54:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 78452316 kB' 'MemAvailable: 81903584 kB' 'Buffers: 9460 kB' 'Cached: 8824440 kB' 'SwapCached: 0 kB' 'Active: 6179640 kB' 'Inactive: 3400272 kB' 'Active(anon): 5634788 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 749544 kB' 'Mapped: 144728 kB' 'Shmem: 4888776 kB' 'KReclaimable: 204248 kB' 'Slab: 606940 kB' 'SReclaimable: 204248 kB' 'SUnreclaim: 402692 kB' 'KernelStack: 22528 kB' 'PageTables: 8264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53473340 kB' 'Committed_AS: 7912828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214840 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 'DirectMap1G: 92274688 kB' 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 
00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.742 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.742 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 
03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # 
[[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.743 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.743 03:54:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.744 03:54:40 -- setup/common.sh@33 -- # echo 0 00:03:25.744 03:54:40 -- setup/common.sh@33 -- # return 0 00:03:25.744 03:54:40 -- setup/hugepages.sh@100 -- # resv=0 00:03:25.744 03:54:40 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:25.744 nr_hugepages=1536 00:03:25.744 03:54:40 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:25.744 resv_hugepages=0 00:03:25.744 03:54:40 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:25.744 surplus_hugepages=0 00:03:25.744 03:54:40 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:25.744 anon_hugepages=0 00:03:25.744 03:54:40 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:25.744 03:54:40 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:25.744 03:54:40 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:25.744 03:54:40 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:25.744 03:54:40 -- setup/common.sh@18 -- # local node= 00:03:25.744 03:54:40 -- setup/common.sh@19 -- # local var val 00:03:25.744 03:54:40 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.744 03:54:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.744 03:54:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.744 03:54:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.744 03:54:40 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.744 03:54:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.744 03:54:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 78451228 kB' 'MemAvailable: 81902496 kB' 'Buffers: 9460 kB' 'Cached: 8824460 kB' 'SwapCached: 0 kB' 'Active: 6180012 kB' 'Inactive: 3400272 kB' 'Active(anon): 5635160 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 'Inactive(file): 3400272 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 750340 kB' 'Mapped: 144728 kB' 'Shmem: 4888796 kB' 'KReclaimable: 204248 kB' 'Slab: 606780 kB' 'SReclaimable: 204248 kB' 'SUnreclaim: 402532 kB' 'KernelStack: 22544 kB' 'PageTables: 8256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53473340 kB' 'Committed_AS: 7913024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214840 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 'DirectMap1G: 92274688 kB' 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.744 
03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.744 03:54:40 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.744 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.744 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.745 03:54:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.745 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.745 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.745 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.745 03:54:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.745 03:54:40 -- setup/common.sh@32 -- # continue 00:03:25.745 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.745 03:54:40 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:25.745 03:54:40 -- setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue (repeated for SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted -- none match) 00:03:25.745 03:54:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.745 03:54:40 -- setup/common.sh@33 -- # echo 1536 00:03:25.745 03:54:40 -- setup/common.sh@33 -- # return 0 00:03:25.745 03:54:40 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:25.745 03:54:40 -- setup/hugepages.sh@112 -- # get_nodes 00:03:25.745 03:54:40 -- setup/hugepages.sh@27 -- # local node 00:03:25.745 03:54:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.745 03:54:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:25.745 03:54:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.745 03:54:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:25.745 03:54:40 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:25.745 03:54:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:25.745 03:54:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:25.745 03:54:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.745 03:54:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:25.745 03:54:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.745 03:54:40 -- setup/common.sh@18 -- # local node=0 00:03:25.745 03:54:40 -- setup/common.sh@19 -- # local var
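The trace above repeats one `[[ field == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]` / `continue` pair per meminfo line until the requested field matches, then echoes its value. A minimal sketch of that lookup pattern (a hypothetical helper for illustration, not the SPDK `get_meminfo` source; it covers only the system-wide `/proc/meminfo` case, while the traced script also strips a "Node N " prefix when reading `/sys/devices/system/node/nodeN/meminfo`):

```shell
get_meminfo_value() {
    # $1 = field to fetch, e.g. HugePages_Total
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the repeated "continue" in the trace
        echo "$val"                        # matched: print the value
        return 0
    done </proc/meminfo
    return 1                               # field not present
}
```

Splitting with `IFS=': '` is what makes `var` the field name and `val` the number, mirroring the `read -r var val _` lines in the trace.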
val 00:03:25.745 03:54:40 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.745 03:54:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.745 03:54:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:25.745 03:54:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:25.745 03:54:40 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.745 03:54:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.745 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.745 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.745 03:54:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32581956 kB' 'MemFree: 27618912 kB' 'MemUsed: 4963044 kB' 'SwapCached: 0 kB' 'Active: 2264436 kB' 'Inactive: 152144 kB' 'Active(anon): 1896688 kB' 'Inactive(anon): 0 kB' 'Active(file): 367748 kB' 'Inactive(file): 152144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1906020 kB' 'Mapped: 108592 kB' 'AnonPages: 514248 kB' 'Shmem: 1386128 kB' 'KernelStack: 12600 kB' 'PageTables: 4976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 136812 kB' 'Slab: 402328 kB' 'SReclaimable: 136812 kB' 'SUnreclaim: 265516 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:25.745 03:54:40 -- setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue (repeated for every node0 field from MemTotal through HugePages_Free -- none match) 00:03:25.746 03:54:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.746 03:54:40 -- setup/common.sh@33 -- # echo 0 00:03:25.746 03:54:40 -- setup/common.sh@33 -- # return 0 00:03:25.746 03:54:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.746 03:54:40 -- setup/hugepages.sh@115 -- # for node in
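The `get_nodes` trace earlier keys `nodes_sys` by the numeric suffix of each `/sys/devices/system/node/nodeN` path via `${node##*node}`. A small sketch of that keying (paths and counts below are stand-ins mirroring the traced values node0=512, node1=1024, not read from a live system):

```shell
declare -A nodes_sys
vals=(512 1024)   # stand-in per-node hugepage counts from the trace
i=0
for node in /sys/devices/system/node/node0 /sys/devices/system/node/node1; do
    # ${node##*node} strips everything through the last "node", leaving the id
    nodes_sys[${node##*node}]=${vals[i++]}
done
no_nodes=${#nodes_sys[@]}
echo "no_nodes=$no_nodes node0=${nodes_sys[0]} node1=${nodes_sys[1]}"
```

Indexing by the node id (rather than array position) is what lets the later loops address `nodes_test[node]` and `nodes_sys[node]` with the same key.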
"${!nodes_test[@]}" 00:03:25.746 03:54:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.746 03:54:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:25.746 03:54:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.746 03:54:40 -- setup/common.sh@18 -- # local node=1 00:03:25.746 03:54:40 -- setup/common.sh@19 -- # local var val 00:03:25.746 03:54:40 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.746 03:54:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.746 03:54:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:25.746 03:54:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:25.746 03:54:40 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.746 03:54:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.746 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.746 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.746 03:54:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60733248 kB' 'MemFree: 50830948 kB' 'MemUsed: 9902300 kB' 'SwapCached: 0 kB' 'Active: 3915964 kB' 'Inactive: 3248128 kB' 'Active(anon): 3738860 kB' 'Inactive(anon): 0 kB' 'Active(file): 177104 kB' 'Inactive(file): 3248128 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6927932 kB' 'Mapped: 36136 kB' 'AnonPages: 236260 kB' 'Shmem: 3502700 kB' 'KernelStack: 9944 kB' 'PageTables: 3180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67452 kB' 'Slab: 204420 kB' 'SReclaimable: 67452 kB' 'SUnreclaim: 136968 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:25.746 03:54:40 -- setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue (repeated for every node1 field from MemTotal through HugePages_Free -- none match) 00:03:25.747 03:54:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.747 03:54:40 -- setup/common.sh@31 -- # read -r var val _ 
00:03:25.747 03:54:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.747 03:54:40 -- setup/common.sh@33 -- # echo 0 00:03:25.747 03:54:40 -- setup/common.sh@33 -- # return 0 00:03:25.747 03:54:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.747 03:54:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:25.747 03:54:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:25.747 03:54:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:25.747 03:54:40 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:25.747 node0=512 expecting 512 00:03:25.747 03:54:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:25.747 03:54:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:25.747 03:54:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:25.747 03:54:40 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:25.747 node1=1024 expecting 1024 00:03:25.747 03:54:40 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:25.747 00:03:25.747 real 0m4.234s 00:03:25.747 user 0m1.652s 00:03:25.747 sys 0m2.617s 00:03:25.747 03:54:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:25.747 03:54:40 -- common/autotest_common.sh@10 -- # set +x 00:03:25.747 ************************************ 00:03:25.747 END TEST custom_alloc 00:03:25.747 ************************************ 00:03:25.747 03:54:40 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:25.747 03:54:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:25.747 03:54:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:25.747 03:54:40 -- common/autotest_common.sh@10 -- # set +x 00:03:26.007 ************************************ 00:03:26.007 START TEST no_shrink_alloc 00:03:26.007 ************************************ 00:03:26.007 03:54:40 -- 
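The closing check of `custom_alloc` above compares the joined per-node counts `512,1024` against the expected pattern. One way to produce that joined form from the per-node arrays (a hypothetical reconstruction of the `sorted_t`/`sorted_s` idea at hugepages.sh@126-128, not the exact SPDK code):

```shell
declare -A sorted_t
nodes_test=([0]=512 [1]=1024)          # per-node HugePages_Total from the trace
for node in "${!nodes_test[@]}"; do
    sorted_t[${nodes_test[node]}]=1    # use the counts themselves as keys
done
# Keys of an associative array come back unordered, so sort before joining:
joined=$(printf '%s\n' "${!sorted_t[@]}" | sort -n | paste -sd, -)
echo "$joined"
```

Using the counts as associative-array keys de-duplicates and makes the comparison independent of which node held which count.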
common/autotest_common.sh@1111 -- # no_shrink_alloc 00:03:26.007 03:54:40 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:26.007 03:54:40 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:26.007 03:54:40 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:26.007 03:54:40 -- setup/hugepages.sh@51 -- # shift 00:03:26.007 03:54:40 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:26.007 03:54:40 -- setup/hugepages.sh@52 -- # local node_ids 00:03:26.007 03:54:40 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:26.007 03:54:40 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:26.007 03:54:40 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:26.007 03:54:40 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:26.007 03:54:40 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:26.007 03:54:40 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:26.007 03:54:40 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:26.007 03:54:40 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:26.007 03:54:40 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:26.007 03:54:40 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:26.007 03:54:40 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:26.007 03:54:40 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:26.007 03:54:40 -- setup/hugepages.sh@73 -- # return 0 00:03:26.007 03:54:40 -- setup/hugepages.sh@198 -- # setup output 00:03:26.007 03:54:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.007 03:54:40 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:28.546 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:28.547 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:28.547 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:28.547 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:28.547 0000:00:04.3 (8086 2021): 
Already using the vfio-pci driver 00:03:28.547 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:28.547 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:28.547 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:28.547 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:28.547 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:28.547 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:28.547 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:28.547 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:28.547 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:28.547 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:28.805 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:28.805 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:30.270 03:54:44 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:30.270 03:54:44 -- setup/hugepages.sh@89 -- # local node 00:03:30.270 03:54:44 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:30.270 03:54:44 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:30.270 03:54:44 -- setup/hugepages.sh@92 -- # local surp 00:03:30.270 03:54:44 -- setup/hugepages.sh@93 -- # local resv 00:03:30.270 03:54:44 -- setup/hugepages.sh@94 -- # local anon 00:03:30.270 03:54:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:30.270 03:54:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:30.270 03:54:44 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:30.270 03:54:44 -- setup/common.sh@18 -- # local node= 00:03:30.270 03:54:44 -- setup/common.sh@19 -- # local var val 00:03:30.270 03:54:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.270 03:54:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.270 03:54:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.270 03:54:44 -- 
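The `no_shrink_alloc` prologue above calls `get_test_nr_hugepages 2097152 0` and the trace sets `nr_hugepages=1024` on the single requested node. An arithmetic sketch consistent with that trace (the kB units and the division are assumptions inferred from the traced values, not taken from the SPDK source):

```shell
size=2097152              # requested size in kB (assumption), i.e. 2 GiB
default_hugepages=2048    # default hugepage size in kB (Hugepagesize: 2048 kB)
nodes_test=()
if (( size >= default_hugepages )); then
    nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 pages
fi
user_nodes=(0)            # node list passed to the helper
for n in "${user_nodes[@]}"; do
    nodes_test[n]=$nr_hugepages   # pin the whole allocation to node 0
done
echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"
```

This matches the traced assignments `nr_hugepages=1024` and `nodes_test[_no_nodes]=1024` for the single user node.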
setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.270 03:54:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.270 03:54:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.270 03:54:44 -- setup/common.sh@31 -- # IFS=': '
00:03:30.270 03:54:44 -- setup/common.sh@31 -- # read -r var val _
00:03:30.270 03:54:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 79467380 kB' 'MemAvailable: 82918576 kB' 'Buffers: 9460 kB' 'Cached: 8824580 kB' 'SwapCached: 0 kB' 'Active: 6184268 kB' 'Inactive: 3400272 kB' 'Active(anon): 5639416 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 753892 kB' 'Mapped: 144800 kB' 'Shmem: 4888916 kB' 'KReclaimable: 204104 kB' 'Slab: 606112 kB' 'SReclaimable: 204104 kB' 'SUnreclaim: 402008 kB' 'KernelStack: 22624 kB' 'PageTables: 8188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53997628 kB' 'Committed_AS: 7913460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214760 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 'DirectMap1G: 92274688 kB'
00:03:30.270 03:54:44 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:30.270 03:54:44 -- setup/common.sh@32 -- # continue
[... identical "setup/common.sh@32 -- # [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" checks for the remaining meminfo fields elided ...]
00:03:30.271 03:54:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:30.271 03:54:44 -- setup/common.sh@33 -- # echo 0
00:03:30.271 03:54:44 -- setup/common.sh@33 -- # return 0
00:03:30.271 03:54:44 -- setup/hugepages.sh@97 -- # anon=0
00:03:30.271 03:54:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:30.271 03:54:44 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:30.271 03:54:44 -- setup/common.sh@18 -- # local node=
00:03:30.271 03:54:44 -- setup/common.sh@19 -- # local var val
00:03:30.271 03:54:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:30.271 03:54:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.271 03:54:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.271 03:54:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.271 03:54:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.271 03:54:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.271 03:54:44 -- setup/common.sh@31 -- # IFS=': '
00:03:30.271 03:54:44 -- setup/common.sh@31 -- # read -r var val _
00:03:30.271 03:54:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 79469592 kB' 'MemAvailable: 82920788 kB' 'Buffers: 9460 kB' 'Cached: 8824584 kB' 'SwapCached: 0 kB' 'Active: 6184336 kB' 'Inactive: 3400272 kB' 'Active(anon): 5639484 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 754024 kB' 'Mapped: 144800 kB' 'Shmem: 4888920 kB' 'KReclaimable: 204104 kB' 'Slab: 606068 kB' 'SReclaimable: 204104 kB' 'SUnreclaim: 401964 kB' 'KernelStack: 22640 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53997628 kB' 'Committed_AS: 7913600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214808 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 'DirectMap1G: 92274688 kB'
00:03:30.271 03:54:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.271 03:54:44 -- setup/common.sh@32 -- # continue
[... identical "setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" checks for the remaining meminfo fields elided ...]
00:03:30.273 03:54:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.273 03:54:44 -- setup/common.sh@33 -- # echo 0
00:03:30.273 03:54:44 -- setup/common.sh@33 -- # return 0
00:03:30.273 03:54:44 -- setup/hugepages.sh@99 -- # surp=0
00:03:30.273 03:54:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:30.273
03:54:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:30.273 03:54:44 -- setup/common.sh@18 -- # local node= 00:03:30.273 03:54:44 -- setup/common.sh@19 -- # local var val 00:03:30.273 03:54:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.273 03:54:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.273 03:54:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.273 03:54:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.273 03:54:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.273 03:54:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 03:54:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 79467784 kB' 'MemAvailable: 82918980 kB' 'Buffers: 9460 kB' 'Cached: 8824596 kB' 'SwapCached: 0 kB' 'Active: 6184120 kB' 'Inactive: 3400272 kB' 'Active(anon): 5639268 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 753780 kB' 'Mapped: 144800 kB' 'Shmem: 4888932 kB' 'KReclaimable: 204104 kB' 'Slab: 606112 kB' 'SReclaimable: 204104 kB' 'SUnreclaim: 402008 kB' 'KernelStack: 22608 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53997628 kB' 'Committed_AS: 7913804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214824 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 
'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 'DirectMap1G: 92274688 kB'
00:03:30.273 03:54:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:30.273 03:54:44 -- setup/common.sh@32 -- # continue
[... identical "setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" checks elided (fields MemFree through Mlocked) ...]
00:03:30.273 03:54:44 -- setup/common.sh@31 -- # IFS=': '
00:03:30.273 03:54:44 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 03:54:44 -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.273 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.273 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.274 03:54:44 -- setup/common.sh@33 -- # echo 0 00:03:30.274 03:54:44 -- setup/common.sh@33 -- # return 0 00:03:30.274 03:54:44 -- setup/hugepages.sh@100 -- # resv=0 00:03:30.274 03:54:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:30.274 nr_hugepages=1024 00:03:30.274 03:54:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:30.274 resv_hugepages=0 00:03:30.274 03:54:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:30.274 surplus_hugepages=0 00:03:30.274 03:54:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:30.274 anon_hugepages=0 00:03:30.274 03:54:44 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.274 03:54:44 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:30.274 03:54:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:30.274 03:54:44 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:30.274 03:54:44 -- setup/common.sh@18 -- # local node= 00:03:30.274 03:54:44 -- setup/common.sh@19 -- # local var val 00:03:30.274 03:54:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.274 03:54:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.274 03:54:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.274 03:54:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.274 03:54:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.274 03:54:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 79465724 kB' 'MemAvailable: 
82916920 kB' 'Buffers: 9460 kB' 'Cached: 8824624 kB' 'SwapCached: 0 kB' 'Active: 6184140 kB' 'Inactive: 3400272 kB' 'Active(anon): 5639288 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 753448 kB' 'Mapped: 144800 kB' 'Shmem: 4888960 kB' 'KReclaimable: 204104 kB' 'Slab: 606112 kB' 'SReclaimable: 204104 kB' 'SUnreclaim: 402008 kB' 'KernelStack: 22560 kB' 'PageTables: 8048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53997628 kB' 'Committed_AS: 7914000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214856 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 'DirectMap1G: 92274688 kB' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 
00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.274 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.274 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 
-- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.275 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.275 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 
-- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.276 03:54:44 -- setup/common.sh@33 -- # echo 1024 00:03:30.276 03:54:44 -- setup/common.sh@33 -- # return 0 00:03:30.276 03:54:44 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.276 03:54:44 -- setup/hugepages.sh@112 -- # get_nodes 00:03:30.276 03:54:44 -- setup/hugepages.sh@27 -- # local node 00:03:30.276 03:54:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.276 03:54:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:30.276 03:54:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.276 03:54:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:30.276 03:54:44 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:30.276 03:54:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.276 03:54:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.276 03:54:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv 
)) 00:03:30.276 03:54:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:30.276 03:54:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.276 03:54:44 -- setup/common.sh@18 -- # local node=0 00:03:30.276 03:54:44 -- setup/common.sh@19 -- # local var val 00:03:30.276 03:54:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.276 03:54:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.276 03:54:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:30.276 03:54:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:30.276 03:54:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.276 03:54:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32581956 kB' 'MemFree: 26555468 kB' 'MemUsed: 6026488 kB' 'SwapCached: 0 kB' 'Active: 2269320 kB' 'Inactive: 152144 kB' 'Active(anon): 1901572 kB' 'Inactive(anon): 0 kB' 'Active(file): 367748 kB' 'Inactive(file): 152144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1906076 kB' 'Mapped: 108592 kB' 'AnonPages: 519048 kB' 'Shmem: 1386184 kB' 'KernelStack: 12728 kB' 'PageTables: 5244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 136716 kB' 'Slab: 401836 kB' 'SReclaimable: 136716 kB' 'SUnreclaim: 265120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.276 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.276 03:54:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # continue 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.277 03:54:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.277 03:54:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.277 03:54:44 -- setup/common.sh@33 -- # echo 0 00:03:30.277 03:54:44 -- setup/common.sh@33 -- # return 0 00:03:30.277 03:54:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.277 03:54:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.277 03:54:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.277 03:54:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.277 03:54:44 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:30.277 node0=1024 expecting 1024 00:03:30.277 03:54:44 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:30.277 03:54:44 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:30.277 03:54:44 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:30.277 03:54:44 -- setup/hugepages.sh@202 -- # setup output 00:03:30.277 03:54:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.277 03:54:44 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:32.817 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:32.817 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:32.817 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:32.817 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:32.817 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:32.817 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:32.817 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:32.817 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:32.817 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:32.817 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:32.817 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:32.817 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:32.817 0000:80:04.3 (8086 2021): Already using the vfio-pci 
driver 00:03:32.817 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:32.817 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:32.817 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:32.817 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:34.203 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:34.203 03:54:48 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:34.203 03:54:48 -- setup/hugepages.sh@89 -- # local node 00:03:34.203 03:54:48 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:34.203 03:54:48 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:34.203 03:54:48 -- setup/hugepages.sh@92 -- # local surp 00:03:34.203 03:54:48 -- setup/hugepages.sh@93 -- # local resv 00:03:34.203 03:54:48 -- setup/hugepages.sh@94 -- # local anon 00:03:34.203 03:54:48 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:34.203 03:54:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:34.203 03:54:48 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:34.203 03:54:48 -- setup/common.sh@18 -- # local node= 00:03:34.203 03:54:48 -- setup/common.sh@19 -- # local var val 00:03:34.203 03:54:48 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.203 03:54:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.203 03:54:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.203 03:54:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.203 03:54:48 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.203 03:54:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.203 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.203 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.203 03:54:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 79473188 kB' 'MemAvailable: 82924384 kB' 'Buffers: 9460 kB' 'Cached: 8824712 kB' 'SwapCached: 0 kB' 'Active: 6187888 kB' 
'Inactive: 3400272 kB' 'Active(anon): 5643036 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 756864 kB' 'Mapped: 144916 kB' 'Shmem: 4889048 kB' 'KReclaimable: 204104 kB' 'Slab: 605936 kB' 'SReclaimable: 204104 kB' 'SUnreclaim: 401832 kB' 'KernelStack: 22416 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53997628 kB' 'Committed_AS: 7911712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214632 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 'DirectMap1G: 92274688 kB' 00:03:34.203 03:54:48 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.203 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.203 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.203 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.203 03:54:48 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.203 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.203 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.203 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ 
Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # 
continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 
00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.204 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.204 03:54:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.204 03:54:48 -- setup/common.sh@33 -- # echo 0 00:03:34.204 03:54:48 -- setup/common.sh@33 -- # return 0 00:03:34.204 03:54:48 -- setup/hugepages.sh@97 -- # anon=0 00:03:34.204 03:54:48 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:34.204 03:54:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.204 03:54:48 -- setup/common.sh@18 -- # local node= 00:03:34.204 03:54:48 -- setup/common.sh@19 -- # local var val 00:03:34.204 03:54:48 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.204 03:54:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.204 03:54:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.204 03:54:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.204 03:54:48 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.204 03:54:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 79475424 kB' 'MemAvailable: 82926620 kB' 'Buffers: 9460 kB' 'Cached: 8824716 kB' 'SwapCached: 0 kB' 'Active: 6188108 kB' 'Inactive: 3400272 kB' 'Active(anon): 5643256 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 757128 kB' 'Mapped: 144900 kB' 'Shmem: 4889052 kB' 'KReclaimable: 204104 kB' 'Slab: 605952 
kB' 'SReclaimable: 204104 kB' 'SUnreclaim: 401848 kB' 'KernelStack: 22400 kB' 'PageTables: 7956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53997628 kB' 'Committed_AS: 7911720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214600 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 'DirectMap1G: 92274688 kB' 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 
00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 
03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 
03:54:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.205 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.205 03:54:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.206 03:54:48 -- setup/common.sh@33 -- # echo 0 00:03:34.206 03:54:48 -- setup/common.sh@33 -- # return 0 00:03:34.206 03:54:48 -- setup/hugepages.sh@99 -- # surp=0 00:03:34.206 03:54:48 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:34.206 03:54:48 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:34.206 03:54:48 -- setup/common.sh@18 -- # local node= 00:03:34.206 03:54:48 -- setup/common.sh@19 -- # local var val 00:03:34.206 03:54:48 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.206 03:54:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.206 03:54:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.206 03:54:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.206 03:54:48 -- setup/common.sh@28 -- # mapfile -t 
mem 00:03:34.206 03:54:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 79475652 kB' 'MemAvailable: 82926848 kB' 'Buffers: 9460 kB' 'Cached: 8824728 kB' 'SwapCached: 0 kB' 'Active: 6188124 kB' 'Inactive: 3400272 kB' 'Active(anon): 5643272 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 757612 kB' 'Mapped: 144840 kB' 'Shmem: 4889064 kB' 'KReclaimable: 204104 kB' 'Slab: 605936 kB' 'SReclaimable: 204104 kB' 'SUnreclaim: 401832 kB' 'KernelStack: 22400 kB' 'PageTables: 7956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53997628 kB' 'Committed_AS: 7913372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214552 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 'DirectMap1G: 92274688 kB' 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 
00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.206 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 
03:54:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 
-- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 03:54:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.207 03:54:48 -- setup/common.sh@33 -- # echo 0 00:03:34.207 03:54:48 -- setup/common.sh@33 -- # return 0 00:03:34.207 03:54:48 -- setup/hugepages.sh@100 -- # resv=0 00:03:34.207 03:54:48 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:34.207 
nr_hugepages=1024 00:03:34.207 03:54:48 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:34.207 resv_hugepages=0 00:03:34.207 03:54:48 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:34.207 surplus_hugepages=0 00:03:34.207 03:54:48 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:34.207 anon_hugepages=0 00:03:34.207 03:54:48 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.207 03:54:48 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:34.207 03:54:48 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:34.207 03:54:48 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:34.207 03:54:48 -- setup/common.sh@18 -- # local node= 00:03:34.207 03:54:48 -- setup/common.sh@19 -- # local var val 00:03:34.207 03:54:48 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.207 03:54:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.207 03:54:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.207 03:54:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.207 03:54:48 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.207 03:54:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93315204 kB' 'MemFree: 79476252 kB' 'MemAvailable: 82927448 kB' 'Buffers: 9460 kB' 'Cached: 8824744 kB' 'SwapCached: 0 kB' 'Active: 6187704 kB' 'Inactive: 3400272 kB' 'Active(anon): 5642852 kB' 'Inactive(anon): 0 kB' 'Active(file): 544852 kB' 'Inactive(file): 3400272 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 757144 kB' 'Mapped: 144840 kB' 'Shmem: 4889080 kB' 'KReclaimable: 204104 kB' 'Slab: 605904 kB' 'SReclaimable: 204104 kB' 'SUnreclaim: 
401800 kB' 'KernelStack: 22416 kB' 'PageTables: 7636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53997628 kB' 'Committed_AS: 7913020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214568 kB' 'VmallocChunk: 0 kB' 'Percpu: 68096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 603620 kB' 'DirectMap2M: 9558016 kB' 'DirectMap1G: 92274688 kB' 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 
00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.208 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.208 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.209 03:54:48 -- setup/common.sh@33 -- # echo 1024 00:03:34.209 03:54:48 -- setup/common.sh@33 -- # return 0 00:03:34.209 03:54:48 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.209 03:54:48 -- setup/hugepages.sh@112 -- # get_nodes 00:03:34.209 03:54:48 -- setup/hugepages.sh@27 -- # local node 00:03:34.209 03:54:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.209 03:54:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:34.209 03:54:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.209 03:54:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:34.209 03:54:48 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:34.209 03:54:48 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:34.209 03:54:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.209 03:54:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.209 03:54:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:34.209 03:54:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.209 03:54:48 -- setup/common.sh@18 -- # local node=0 00:03:34.209 03:54:48 -- setup/common.sh@19 -- # local var val 00:03:34.209 03:54:48 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.209 03:54:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.209 03:54:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:34.209 03:54:48 -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node0/meminfo 00:03:34.209 03:54:48 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.209 03:54:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32581956 kB' 'MemFree: 26563112 kB' 'MemUsed: 6018844 kB' 'SwapCached: 0 kB' 'Active: 2272712 kB' 'Inactive: 152144 kB' 'Active(anon): 1904964 kB' 'Inactive(anon): 0 kB' 'Active(file): 367748 kB' 'Inactive(file): 152144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1906092 kB' 'Mapped: 108592 kB' 'AnonPages: 522068 kB' 'Shmem: 1386200 kB' 'KernelStack: 12728 kB' 'PageTables: 5240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 136716 kB' 'Slab: 401808 kB' 'SReclaimable: 136716 kB' 'SUnreclaim: 265092 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.209 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.209 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.210 03:54:48 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # continue 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.210 03:54:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.210 03:54:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.210 03:54:48 -- setup/common.sh@33 -- # echo 0 00:03:34.210 03:54:48 -- setup/common.sh@33 -- # return 0 00:03:34.210 03:54:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.210 03:54:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.210 03:54:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.210 03:54:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.210 03:54:48 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:34.210 
node0=1024 expecting 1024 00:03:34.210 03:54:48 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:34.210 00:03:34.210 real 0m8.316s 00:03:34.210 user 0m3.164s 00:03:34.210 sys 0m5.207s 00:03:34.210 03:54:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:34.210 03:54:48 -- common/autotest_common.sh@10 -- # set +x 00:03:34.210 ************************************ 00:03:34.210 END TEST no_shrink_alloc 00:03:34.210 ************************************ 00:03:34.210 03:54:48 -- setup/hugepages.sh@217 -- # clear_hp 00:03:34.210 03:54:48 -- setup/hugepages.sh@37 -- # local node hp 00:03:34.210 03:54:48 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:34.210 03:54:48 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.210 03:54:48 -- setup/hugepages.sh@41 -- # echo 0 00:03:34.210 03:54:48 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.210 03:54:48 -- setup/hugepages.sh@41 -- # echo 0 00:03:34.210 03:54:48 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:34.210 03:54:48 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.210 03:54:48 -- setup/hugepages.sh@41 -- # echo 0 00:03:34.210 03:54:48 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.210 03:54:48 -- setup/hugepages.sh@41 -- # echo 0 00:03:34.210 03:54:48 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:34.210 03:54:48 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:34.210 00:03:34.210 real 0m33.981s 00:03:34.210 user 0m11.936s 00:03:34.210 sys 0m18.882s 00:03:34.210 03:54:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:34.210 03:54:48 -- common/autotest_common.sh@10 -- # set +x 00:03:34.210 ************************************ 00:03:34.210 END TEST hugepages 00:03:34.210 ************************************ 
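The long run of `IFS=': ' read -r var val _` / `continue` iterations above is the `get_meminfo` helper scanning every `Key: value` line of a meminfo file until it hits the requested key (here `HugePages_Total`, then `HugePages_Surp` for node 0). The following is a minimal standalone sketch of that pattern, inferred from the trace rather than copied from the SPDK sources: the real helper in `setup/common.sh` takes a NUMA node number and switches to `/sys/devices/system/node/nodeN/meminfo`, whereas this sketch takes the file as a plain argument so it can run anywhere.

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo loop traced above: scan a meminfo-style
# file for one "Key: value" line and print the value. File argument
# instead of a node number is an assumption made for self-containment.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local -a mem
    mapfile -t mem < "$mem_f"
    local line var val _
    for line in "${mem[@]}"; do
        # Per-node files prefix each line with "Node <id> "; strip that
        # so the same "Key: value" parse handles both file flavors.
        if [[ $line == Node\ * ]]; then
            line=${line#Node }   # drop "Node "
            line=${line#* }      # drop the node id
        fi
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}
```

In the trace, each non-matching key produces one `[[ Key == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]` test plus a `continue`, which is why a single `get_meminfo` call expands to dozens of xtrace lines.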
00:03:34.470 03:54:48 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:03:34.470 03:54:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:34.470 03:54:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:34.470 03:54:48 -- common/autotest_common.sh@10 -- # set +x 00:03:34.470 ************************************ 00:03:34.470 START TEST driver 00:03:34.470 ************************************ 00:03:34.470 03:54:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:03:34.470 * Looking for test storage... 00:03:34.470 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:34.470 03:54:48 -- setup/driver.sh@68 -- # setup reset 00:03:34.470 03:54:48 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:34.470 03:54:48 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:39.753 03:54:54 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:39.753 03:54:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:39.753 03:54:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:39.753 03:54:54 -- common/autotest_common.sh@10 -- # set +x 00:03:40.014 ************************************ 00:03:40.014 START TEST guess_driver 00:03:40.014 ************************************ 00:03:40.014 03:54:54 -- common/autotest_common.sh@1111 -- # guess_driver 00:03:40.014 03:54:54 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:40.014 03:54:54 -- setup/driver.sh@47 -- # local fail=0 00:03:40.014 03:54:54 -- setup/driver.sh@49 -- # pick_driver 00:03:40.014 03:54:54 -- setup/driver.sh@36 -- # vfio 00:03:40.014 03:54:54 -- setup/driver.sh@21 -- # local iommu_grups 00:03:40.014 03:54:54 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:40.014 03:54:54 -- setup/driver.sh@24 -- # [[ -e 
/sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:40.014 03:54:54 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:40.014 03:54:54 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:40.014 03:54:54 -- setup/driver.sh@29 -- # (( 181 > 0 )) 00:03:40.014 03:54:54 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:40.014 03:54:54 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:40.014 03:54:54 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:40.014 03:54:54 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:40.014 03:54:54 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:40.014 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:40.014 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:40.014 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:40.014 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:40.014 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:40.014 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:40.015 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:40.015 03:54:54 -- setup/driver.sh@30 -- # return 0 00:03:40.015 03:54:54 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:40.015 03:54:54 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:40.015 03:54:54 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:40.015 03:54:54 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:40.015 Looking for driver=vfio-pci 00:03:40.015 03:54:54 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.015 03:54:54 -- setup/driver.sh@45 -- # setup output config 00:03:40.015 03:54:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.015 
03:54:54 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:43.311 03:54:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.311 03:54:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.311 03:54:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.311 03:54:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.311 03:54:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.311 03:54:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.311 03:54:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.311 03:54:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.311 03:54:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.311 03:54:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.311 03:54:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.311 03:54:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.311 03:54:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.311 03:54:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.311 03:54:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.311 03:54:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.311 03:54:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.311 03:54:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.311 03:54:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.311 03:54:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.311 03:54:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.311 03:54:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.311 03:54:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.311 03:54:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.311 03:54:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 
00:03:43.311 03:54:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.311 03:54:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.311 03:54:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.311 03:54:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.311 03:54:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.311 03:54:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.311 03:54:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.311 03:54:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.311 03:54:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.311 03:54:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.311 03:54:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.311 03:54:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.311 03:54:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.311 03:54:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.311 03:54:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.311 03:54:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.311 03:54:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.311 03:54:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.311 03:54:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.311 03:54:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.311 03:54:57 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:43.311 03:54:57 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:43.311 03:54:57 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:46.606 03:55:00 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:46.606 03:55:00 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:46.606 03:55:00 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.544 03:55:01 
-- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:47.544 03:55:01 -- setup/driver.sh@65 -- # setup reset 00:03:47.544 03:55:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:47.544 03:55:01 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:52.828 00:03:52.828 real 0m12.864s 00:03:52.828 user 0m3.197s 00:03:52.828 sys 0m5.566s 00:03:52.828 03:55:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:52.828 03:55:07 -- common/autotest_common.sh@10 -- # set +x 00:03:52.828 ************************************ 00:03:52.828 END TEST guess_driver 00:03:52.828 ************************************ 00:03:52.828 00:03:52.828 real 0m18.408s 00:03:52.828 user 0m4.827s 00:03:52.828 sys 0m8.589s 00:03:52.828 03:55:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:52.828 03:55:07 -- common/autotest_common.sh@10 -- # set +x 00:03:52.828 ************************************ 00:03:52.828 END TEST driver 00:03:52.828 ************************************ 00:03:52.828 03:55:07 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:03:52.828 03:55:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:52.828 03:55:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:52.828 03:55:07 -- common/autotest_common.sh@10 -- # set +x 00:03:53.088 ************************************ 00:03:53.088 START TEST devices 00:03:53.088 ************************************ 00:03:53.088 03:55:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:03:53.088 * Looking for test storage... 
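The `guess_driver` test that just passed picked `vfio-pci` because the host had populated IOMMU groups (`(( 181 > 0 ))`) and `modprobe --show-depends vfio_pci` resolved to a real chain of `.ko` modules. A rough standalone sketch of that decision, reconstructed from the trace (the fallback driver name and the exact checks are assumptions, not the authoritative `setup/driver.sh` logic):

```shell
#!/usr/bin/env bash
# Sketch of the pick_driver/vfio logic traced above: prefer vfio-pci
# when IOMMU groups exist and the vfio_pci module chain resolves;
# otherwise fall back to a uio driver. Fallback name is hypothetical.
shopt -s nullglob

pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*)
    # modprobe --show-depends prints the insmod commands it would run;
    # a resolvable module chain contains at least one ".ko" path.
    if (( ${#groups[@]} > 0 )) &&
       modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        echo vfio-pci
    else
        echo uio_pci_generic
    fi
}
```

On the WFP37 node above both conditions held, so the test echoed `Looking for driver=vfio-pci` and later matched each configured device against it.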
00:03:53.088 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:53.088 03:55:07 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:53.088 03:55:07 -- setup/devices.sh@192 -- # setup reset 00:03:53.088 03:55:07 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:53.088 03:55:07 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:58.371 03:55:11 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:58.371 03:55:11 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:58.371 03:55:11 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:58.371 03:55:11 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:58.371 03:55:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:58.371 03:55:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:58.371 03:55:11 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:58.371 03:55:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:58.371 03:55:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:58.371 03:55:11 -- setup/devices.sh@196 -- # blocks=() 00:03:58.371 03:55:11 -- setup/devices.sh@196 -- # declare -a blocks 00:03:58.371 03:55:11 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:58.371 03:55:11 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:58.371 03:55:11 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:58.371 03:55:11 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:58.371 03:55:11 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:58.371 03:55:11 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:58.371 03:55:11 -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:03:58.371 03:55:11 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:58.371 03:55:11 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:58.371 03:55:11 -- scripts/common.sh@378 -- # 
local block=nvme0n1 pt 00:03:58.371 03:55:11 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:58.371 No valid GPT data, bailing 00:03:58.371 03:55:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:58.371 03:55:11 -- scripts/common.sh@391 -- # pt= 00:03:58.371 03:55:11 -- scripts/common.sh@392 -- # return 1 00:03:58.371 03:55:11 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:58.371 03:55:11 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:58.371 03:55:11 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:58.371 03:55:11 -- setup/common.sh@80 -- # echo 4000787030016 00:03:58.371 03:55:11 -- setup/devices.sh@204 -- # (( 4000787030016 >= min_disk_size )) 00:03:58.371 03:55:11 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:58.371 03:55:11 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:03:58.371 03:55:11 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:58.371 03:55:11 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:58.371 03:55:11 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:58.371 03:55:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:58.371 03:55:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:58.371 03:55:11 -- common/autotest_common.sh@10 -- # set +x 00:03:58.371 ************************************ 00:03:58.371 START TEST nvme_mount 00:03:58.371 ************************************ 00:03:58.371 03:55:12 -- common/autotest_common.sh@1111 -- # nvme_mount 00:03:58.371 03:55:12 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:58.371 03:55:12 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:58.371 03:55:12 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.371 03:55:12 -- setup/devices.sh@98 -- # 
nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:58.371 03:55:12 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:58.371 03:55:12 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:58.371 03:55:12 -- setup/common.sh@40 -- # local part_no=1 00:03:58.371 03:55:12 -- setup/common.sh@41 -- # local size=1073741824 00:03:58.371 03:55:12 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:58.371 03:55:12 -- setup/common.sh@44 -- # parts=() 00:03:58.371 03:55:12 -- setup/common.sh@44 -- # local parts 00:03:58.371 03:55:12 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:58.371 03:55:12 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:58.371 03:55:12 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:58.371 03:55:12 -- setup/common.sh@46 -- # (( part++ )) 00:03:58.371 03:55:12 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:58.371 03:55:12 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:58.371 03:55:12 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:58.371 03:55:12 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:58.630 Creating new GPT entries in memory. 00:03:58.630 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:58.631 other utilities. 00:03:58.631 03:55:13 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:58.631 03:55:13 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:58.631 03:55:13 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:58.631 03:55:13 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:58.631 03:55:13 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:00.010 Creating new GPT entries in memory. 00:04:00.010 The operation has completed successfully. 
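Before the partitioning above, the trace shows the device-selection gate from setup/devices.sh: nvme0n1 is only accepted as the test disk because sec_size_to_bytes reported a capacity at or above min_disk_size. A minimal sketch of that comparison, using the exact byte counts logged in this run (the device name and sizes are taken from the log, not re-derived):

```shell
#!/usr/bin/env bash
# Sketch of the size gate in setup/devices.sh@198..204: a block device
# is only added to blocks[] if its capacity meets min_disk_size (3 GiB).
min_disk_size=3221225472      # 3 GiB, as declared at devices.sh@198
dev_size=4000787030016        # bytes echoed by sec_size_to_bytes nvme0n1
if (( dev_size >= min_disk_size )); then
  echo "nvme0n1: eligible"    # the log then records (( 1 > 0 )) and
fi                            # declares test_disk=nvme0n1
```

The "No valid GPT data, bailing" message earlier in the trace is the expected path here: block_in_use only treats the disk as busy when a recognizable partition table is found, so a bare return 1 lets the setup continue.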
00:04:00.010 03:55:14 -- setup/common.sh@57 -- # (( part++ )) 00:04:00.010 03:55:14 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:00.011 03:55:14 -- setup/common.sh@62 -- # wait 103441 00:04:00.011 03:55:14 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.011 03:55:14 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:00.011 03:55:14 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.011 03:55:14 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:00.011 03:55:14 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:00.011 03:55:14 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.011 03:55:14 -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:00.011 03:55:14 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:00.011 03:55:14 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:00.011 03:55:14 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.011 03:55:14 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:00.011 03:55:14 -- setup/devices.sh@53 -- # local found=0 00:04:00.011 03:55:14 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:00.011 03:55:14 -- setup/devices.sh@56 -- # : 00:04:00.011 03:55:14 -- setup/devices.sh@59 -- # local pci status 00:04:00.011 03:55:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.011 03:55:14 -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:00.011 03:55:14 -- setup/devices.sh@47 -- # setup output config 00:04:00.011 03:55:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.011 03:55:14 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:02.551 03:55:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.551 03:55:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.551 03:55:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.551 03:55:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.551 03:55:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.551 03:55:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.551 03:55:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.551 03:55:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.551 03:55:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.551 03:55:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.551 03:55:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.551 03:55:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.551 03:55:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.551 03:55:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.551 03:55:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.551 03:55:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.551 03:55:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.551 03:55:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.551 03:55:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.551 03:55:16 -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:04:02.551 03:55:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.551 03:55:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.551 03:55:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.551 03:55:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.551 03:55:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.551 03:55:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.551 03:55:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.551 03:55:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.551 03:55:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.551 03:55:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.551 03:55:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.551 03:55:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.551 03:55:16 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:02.551 03:55:16 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:02.551 03:55:16 -- setup/devices.sh@63 -- # found=1 00:04:02.551 03:55:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.991 03:55:18 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:03.991 03:55:18 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:03.991 03:55:18 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:03.991 03:55:18 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:03.991 03:55:18 -- setup/devices.sh@74 -- # rm 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:03.991 03:55:18 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:03.991 03:55:18 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:03.991 03:55:18 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:03.991 03:55:18 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:03.991 03:55:18 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:03.991 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:03.991 03:55:18 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:03.991 03:55:18 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:03.991 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:03.991 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:03.991 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:03.991 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:03.991 03:55:18 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:03.991 03:55:18 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:03.991 03:55:18 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:03.991 03:55:18 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:03.991 03:55:18 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:04.251 03:55:18 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.251 03:55:18 -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:04.251 03:55:18 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:04.251 03:55:18 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:04.251 03:55:18 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.251 03:55:18 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:04.251 03:55:18 -- setup/devices.sh@53 -- # local found=0 00:04:04.251 03:55:18 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:04.251 03:55:18 -- setup/devices.sh@56 -- # : 00:04:04.251 03:55:18 -- setup/devices.sh@59 -- # local pci status 00:04:04.251 03:55:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.251 03:55:18 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:04.251 03:55:18 -- setup/devices.sh@47 -- # setup output config 00:04:04.251 03:55:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.251 03:55:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:06.794 03:55:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.794 03:55:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.794 03:55:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.794 03:55:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.794 03:55:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.794 03:55:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.794 03:55:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.794 03:55:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.794 
03:55:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.794 03:55:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.794 03:55:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.794 03:55:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.794 03:55:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.794 03:55:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.794 03:55:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.794 03:55:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.794 03:55:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.794 03:55:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.794 03:55:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.794 03:55:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.794 03:55:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.794 03:55:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.794 03:55:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.794 03:55:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.794 03:55:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.794 03:55:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.794 03:55:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.794 03:55:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.794 03:55:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.794 03:55:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.794 03:55:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.794 03:55:21 -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:04:06.794 03:55:21 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.794 03:55:21 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:06.794 03:55:21 -- setup/devices.sh@63 -- # found=1 00:04:06.794 03:55:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.176 03:55:22 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:08.177 03:55:22 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:08.177 03:55:22 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:08.177 03:55:22 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:08.177 03:55:22 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:08.177 03:55:22 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:08.177 03:55:22 -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:08.177 03:55:22 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:08.177 03:55:22 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:08.177 03:55:22 -- setup/devices.sh@50 -- # local mount_point= 00:04:08.177 03:55:22 -- setup/devices.sh@51 -- # local test_file= 00:04:08.177 03:55:22 -- setup/devices.sh@53 -- # local found=0 00:04:08.177 03:55:22 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:08.177 03:55:22 -- setup/devices.sh@59 -- # local pci status 00:04:08.177 03:55:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.177 03:55:22 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:08.177 03:55:22 -- setup/devices.sh@47 -- # setup output config 00:04:08.177 03:55:22 -- setup/common.sh@9 -- # [[ 
output == output ]] 00:04:08.177 03:55:22 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:10.722 03:55:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.722 03:55:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.722 03:55:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.722 03:55:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.722 03:55:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.722 03:55:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.722 03:55:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.722 03:55:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.722 03:55:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.722 03:55:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.723 03:55:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.723 03:55:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.723 03:55:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.723 03:55:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.723 03:55:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.723 03:55:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.723 03:55:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.723 03:55:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.723 03:55:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.723 03:55:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.723 03:55:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.723 03:55:25 -- setup/devices.sh@60 -- # read -r pci _ 
_ status 00:04:10.723 03:55:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.723 03:55:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.723 03:55:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.723 03:55:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.723 03:55:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.723 03:55:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.723 03:55:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.723 03:55:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.723 03:55:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.723 03:55:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.982 03:55:25 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:10.982 03:55:25 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:10.982 03:55:25 -- setup/devices.sh@63 -- # found=1 00:04:10.982 03:55:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.365 03:55:26 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:12.365 03:55:26 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:12.365 03:55:26 -- setup/devices.sh@68 -- # return 0 00:04:12.365 03:55:26 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:12.365 03:55:26 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.365 03:55:26 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:12.365 03:55:26 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:12.365 03:55:26 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:12.365 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:12.365 00:04:12.365 real 0m14.599s 
00:04:12.365 user 0m4.584s 00:04:12.365 sys 0m7.761s 00:04:12.365 03:55:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:12.365 03:55:26 -- common/autotest_common.sh@10 -- # set +x 00:04:12.365 ************************************ 00:04:12.365 END TEST nvme_mount 00:04:12.365 ************************************ 00:04:12.365 03:55:26 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:12.365 03:55:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:12.365 03:55:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:12.365 03:55:26 -- common/autotest_common.sh@10 -- # set +x 00:04:12.365 ************************************ 00:04:12.365 START TEST dm_mount 00:04:12.365 ************************************ 00:04:12.365 03:55:26 -- common/autotest_common.sh@1111 -- # dm_mount 00:04:12.365 03:55:26 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:12.365 03:55:26 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:12.365 03:55:26 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:12.365 03:55:26 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:12.365 03:55:26 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:12.365 03:55:26 -- setup/common.sh@40 -- # local part_no=2 00:04:12.365 03:55:26 -- setup/common.sh@41 -- # local size=1073741824 00:04:12.365 03:55:26 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:12.365 03:55:26 -- setup/common.sh@44 -- # parts=() 00:04:12.365 03:55:26 -- setup/common.sh@44 -- # local parts 00:04:12.365 03:55:26 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:12.365 03:55:26 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:12.365 03:55:26 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:12.365 03:55:26 -- setup/common.sh@46 -- # (( part++ )) 00:04:12.365 03:55:26 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:12.365 03:55:26 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:12.365 03:55:26 -- setup/common.sh@46 -- # (( part++ )) 00:04:12.365 
03:55:26 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:12.365 03:55:26 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:12.365 03:55:26 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:12.365 03:55:26 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:13.747 Creating new GPT entries in memory. 00:04:13.747 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:13.747 other utilities. 00:04:13.747 03:55:27 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:13.747 03:55:27 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:13.747 03:55:27 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:13.747 03:55:27 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:13.747 03:55:27 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:14.688 Creating new GPT entries in memory. 00:04:14.688 The operation has completed successfully. 00:04:14.688 03:55:28 -- setup/common.sh@57 -- # (( part++ )) 00:04:14.688 03:55:28 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:14.688 03:55:28 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:14.688 03:55:28 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:14.688 03:55:28 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:15.627 The operation has completed successfully. 
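The two sgdisk --new ranges logged above (1:2048:2099199 and 2:2099200:4196351) follow from the sector arithmetic in setup/common.sh@51..60: the 1 GiB size is converted to 512-byte sectors, the first partition starts at sector 2048, and each subsequent partition starts one sector past the previous end. A self-contained sketch of that loop (part_no and size mirror this dm_mount run; the device name is illustrative only, and sgdisk is not actually invoked here):

```shell
#!/usr/bin/env bash
# Sketch of the partition-boundary arithmetic from setup/common.sh
# (partition_drive). Sizes are in 512-byte sectors; the first
# partition starts at sector 2048.
disk=nvme0n1
part_no=2                 # dm_mount creates two partitions
size=1073741824           # 1 GiB per partition, in bytes
(( size /= 512 ))         # common.sh@51: convert bytes -> sectors
part_start=0 part_end=0
for (( part = 1; part <= part_no; part++ )); do
  (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
  (( part_end = part_start + size - 1 ))
  echo "sgdisk /dev/$disk --new=$part:$part_start:$part_end"
done
```

Running this prints the same --new arguments seen in the trace, which is why the second flock'd sgdisk call picks up exactly where the first partition ended.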
00:04:15.627 03:55:29 -- setup/common.sh@57 -- # (( part++ )) 00:04:15.627 03:55:29 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:15.627 03:55:29 -- setup/common.sh@62 -- # wait 108496 00:04:15.628 03:55:29 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:15.628 03:55:29 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:15.628 03:55:29 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:15.628 03:55:29 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:15.628 03:55:29 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:15.628 03:55:29 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:15.628 03:55:29 -- setup/devices.sh@161 -- # break 00:04:15.628 03:55:29 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:15.628 03:55:29 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:15.628 03:55:29 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:15.628 03:55:29 -- setup/devices.sh@166 -- # dm=dm-0 00:04:15.628 03:55:29 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:15.628 03:55:29 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:15.628 03:55:29 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:15.628 03:55:29 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:04:15.628 03:55:29 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:15.628 03:55:29 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:15.628 03:55:29 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:15.628 03:55:30 -- setup/common.sh@72 -- # mount 
/dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:15.628 03:55:30 -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:15.628 03:55:30 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:15.628 03:55:30 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:15.628 03:55:30 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:15.628 03:55:30 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:15.628 03:55:30 -- setup/devices.sh@53 -- # local found=0 00:04:15.628 03:55:30 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:15.628 03:55:30 -- setup/devices.sh@56 -- # : 00:04:15.628 03:55:30 -- setup/devices.sh@59 -- # local pci status 00:04:15.628 03:55:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.628 03:55:30 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:15.628 03:55:30 -- setup/devices.sh@47 -- # setup output config 00:04:15.628 03:55:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.628 03:55:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:18.168 03:55:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.168 03:55:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.168 03:55:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.168 03:55:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.168 03:55:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.168 03:55:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:04:18.168 03:55:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.168 03:55:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.168 03:55:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.168 03:55:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.168 03:55:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.168 03:55:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.168 03:55:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.168 03:55:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.168 03:55:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.168 03:55:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.168 03:55:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.168 03:55:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.168 03:55:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.168 03:55:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.168 03:55:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.168 03:55:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.168 03:55:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.168 03:55:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.168 03:55:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.168 03:55:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.168 03:55:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.168 03:55:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.168 03:55:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.168 03:55:32 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.168 03:55:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.168 03:55:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.168 03:55:32 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.168 03:55:32 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:18.168 03:55:32 -- setup/devices.sh@63 -- # found=1 00:04:18.168 03:55:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.548 03:55:33 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:19.548 03:55:33 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:19.548 03:55:33 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:19.548 03:55:33 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:19.548 03:55:33 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:19.549 03:55:33 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:19.549 03:55:34 -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:19.549 03:55:34 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:19.549 03:55:34 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:19.549 03:55:34 -- setup/devices.sh@50 -- # local mount_point= 00:04:19.549 03:55:34 -- setup/devices.sh@51 -- # local test_file= 00:04:19.549 03:55:34 -- setup/devices.sh@53 -- # local found=0 00:04:19.549 03:55:34 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:19.549 03:55:34 -- 
setup/devices.sh@59 -- # local pci status 00:04:19.549 03:55:34 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:19.549 03:55:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.549 03:55:34 -- setup/devices.sh@47 -- # setup output config 00:04:19.549 03:55:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.549 03:55:34 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:22.091 03:55:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.091 03:55:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.091 03:55:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.091 03:55:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.091 03:55:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.091 03:55:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.091 03:55:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.091 03:55:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.091 03:55:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.091 03:55:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.091 03:55:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.091 03:55:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.091 03:55:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.091 03:55:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.091 03:55:36 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.091 03:55:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.091 03:55:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.091 03:55:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.091 
03:55:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.091 03:55:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.091 03:55:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.091 03:55:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.091 03:55:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.091 03:55:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.091 03:55:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.091 03:55:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.091 03:55:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.091 03:55:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.091 03:55:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.092 03:55:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.092 03:55:36 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.092 03:55:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.351 03:55:36 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.351 03:55:36 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:22.351 03:55:36 -- setup/devices.sh@63 -- # found=1 00:04:22.351 03:55:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.732 03:55:37 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:23.732 03:55:37 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:23.732 03:55:37 -- setup/devices.sh@68 -- # return 0 00:04:23.732 03:55:37 -- setup/devices.sh@187 -- # cleanup_dm 00:04:23.732 03:55:37 -- setup/devices.sh@33 -- # mountpoint -q 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:23.732 03:55:37 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:23.732 03:55:37 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:23.732 03:55:38 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:23.732 03:55:38 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:23.732 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:23.732 03:55:38 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:23.732 03:55:38 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:23.732 00:04:23.732 real 0m11.203s 00:04:23.732 user 0m2.911s 00:04:23.732 sys 0m5.299s 00:04:23.732 03:55:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:23.732 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:04:23.732 ************************************ 00:04:23.732 END TEST dm_mount 00:04:23.732 ************************************ 00:04:23.732 03:55:38 -- setup/devices.sh@1 -- # cleanup 00:04:23.732 03:55:38 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:23.732 03:55:38 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.732 03:55:38 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:23.732 03:55:38 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:23.733 03:55:38 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:23.733 03:55:38 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:23.993 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:23.993 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:23.993 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:23.993 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:23.993 03:55:38 -- setup/devices.sh@12 -- # cleanup_dm 00:04:23.993 03:55:38 -- 
setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:23.993 03:55:38 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:23.993 03:55:38 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:23.993 03:55:38 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:23.993 03:55:38 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:23.993 03:55:38 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:23.993 00:04:23.993 real 0m30.914s 00:04:23.993 user 0m9.343s 00:04:23.993 sys 0m16.138s 00:04:23.993 03:55:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:23.993 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:04:23.993 ************************************ 00:04:23.993 END TEST devices 00:04:23.993 ************************************ 00:04:23.993 00:04:23.993 real 1m52.919s 00:04:23.993 user 0m35.480s 00:04:23.993 sys 0m59.888s 00:04:23.993 03:55:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:23.993 03:55:38 -- common/autotest_common.sh@10 -- # set +x 00:04:23.993 ************************************ 00:04:23.993 END TEST setup.sh 00:04:23.993 ************************************ 00:04:23.993 03:55:38 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:27.291 Hugepages 00:04:27.291 node hugesize free / total 00:04:27.291 node0 1048576kB 0 / 0 00:04:27.291 node0 2048kB 2048 / 2048 00:04:27.291 node1 1048576kB 0 / 0 00:04:27.291 node1 2048kB 0 / 0 00:04:27.291 00:04:27.291 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:27.291 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:27.291 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:27.291 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:27.291 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:27.291 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:27.291 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:27.291 I/OAT 0000:00:04.6 8086 
2021 0 ioatdma - - 00:04:27.291 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:27.291 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:27.291 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:27.291 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:27.291 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:27.291 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:27.291 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:27.291 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:27.291 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:27.291 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:27.291 03:55:41 -- spdk/autotest.sh@130 -- # uname -s 00:04:27.291 03:55:41 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:27.291 03:55:41 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:27.291 03:55:41 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:29.831 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:29.831 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:29.831 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:29.831 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:29.831 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:29.831 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:29.831 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:29.831 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:29.831 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:29.831 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:29.831 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:29.831 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:29.831 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:29.831 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:29.831 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:29.831 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:33.126 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:34.506 03:55:48 -- common/autotest_common.sh@1518 -- # sleep 1 00:04:35.445 
03:55:49 -- common/autotest_common.sh@1519 -- # bdfs=() 00:04:35.445 03:55:49 -- common/autotest_common.sh@1519 -- # local bdfs 00:04:35.445 03:55:49 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:35.445 03:55:49 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:35.445 03:55:49 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:35.445 03:55:49 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:35.445 03:55:49 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:35.445 03:55:49 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:35.445 03:55:49 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:35.445 03:55:49 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:35.445 03:55:49 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:d8:00.0 00:04:35.445 03:55:49 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:38.739 Waiting for block devices as requested 00:04:38.739 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:38.739 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:38.739 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:38.739 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:38.739 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:38.739 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:38.739 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:38.739 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:38.739 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:38.999 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:38.999 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:38.999 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:38.999 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:39.258 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:39.258 0000:80:04.1 (8086 2021): 
vfio-pci -> ioatdma 00:04:39.258 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:39.518 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:40.897 03:55:55 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:40.897 03:55:55 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:40.897 03:55:55 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:04:40.898 03:55:55 -- common/autotest_common.sh@1488 -- # grep 0000:d8:00.0/nvme/nvme 00:04:40.898 03:55:55 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:40.898 03:55:55 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:40.898 03:55:55 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:40.898 03:55:55 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:04:40.898 03:55:55 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:40.898 03:55:55 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:40.898 03:55:55 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:40.898 03:55:55 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:40.898 03:55:55 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:40.898 03:55:55 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:04:40.898 03:55:55 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:40.898 03:55:55 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:40.898 03:55:55 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:40.898 03:55:55 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:40.898 03:55:55 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:40.898 03:55:55 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:40.898 03:55:55 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:40.898 
03:55:55 -- common/autotest_common.sh@1543 -- # continue 00:04:40.898 03:55:55 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:40.898 03:55:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:40.898 03:55:55 -- common/autotest_common.sh@10 -- # set +x 00:04:40.898 03:55:55 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:40.898 03:55:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:40.898 03:55:55 -- common/autotest_common.sh@10 -- # set +x 00:04:40.898 03:55:55 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:43.442 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:43.442 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:43.442 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:43.442 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:43.442 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:43.702 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:43.702 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:43.702 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:43.702 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:43.702 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:43.702 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:43.702 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:43.702 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:43.702 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:43.702 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:43.702 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:46.996 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:48.374 03:56:02 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:48.374 03:56:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:48.374 03:56:02 -- common/autotest_common.sh@10 -- # set +x 00:04:48.374 03:56:02 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:48.374 03:56:02 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:04:48.374 03:56:02 -- 
common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:04:48.374 03:56:02 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:48.374 03:56:02 -- common/autotest_common.sh@1563 -- # local bdfs 00:04:48.374 03:56:02 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:04:48.374 03:56:02 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:48.374 03:56:02 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:48.374 03:56:02 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:48.374 03:56:02 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:48.374 03:56:02 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:48.374 03:56:02 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:48.374 03:56:02 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:d8:00.0 00:04:48.374 03:56:02 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:04:48.374 03:56:02 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:48.374 03:56:02 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:48.374 03:56:02 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:48.374 03:56:02 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:48.374 03:56:02 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:d8:00.0 00:04:48.374 03:56:02 -- common/autotest_common.sh@1578 -- # [[ -z 0000:d8:00.0 ]] 00:04:48.374 03:56:02 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=119130 00:04:48.374 03:56:02 -- common/autotest_common.sh@1584 -- # waitforlisten 119130 00:04:48.374 03:56:02 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:48.374 03:56:02 -- common/autotest_common.sh@817 -- # '[' -z 119130 ']' 00:04:48.374 03:56:02 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:48.374 03:56:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:48.374 03:56:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.374 03:56:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:48.374 03:56:02 -- common/autotest_common.sh@10 -- # set +x 00:04:48.374 [2024-04-19 03:56:02.838959] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:04:48.374 [2024-04-19 03:56:02.839002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119130 ] 00:04:48.374 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.633 [2024-04-19 03:56:02.907655] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.633 [2024-04-19 03:56:02.979756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.200 03:56:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:49.200 03:56:03 -- common/autotest_common.sh@850 -- # return 0 00:04:49.200 03:56:03 -- common/autotest_common.sh@1586 -- # bdf_id=0 00:04:49.200 03:56:03 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}" 00:04:49.201 03:56:03 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:52.493 nvme0n1 00:04:52.493 03:56:06 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:52.493 [2024-04-19 03:56:06.721618] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:52.493 request: 00:04:52.493 { 
00:04:52.493 "nvme_ctrlr_name": "nvme0", 00:04:52.493 "password": "test", 00:04:52.493 "method": "bdev_nvme_opal_revert", 00:04:52.493 "req_id": 1 00:04:52.493 } 00:04:52.493 Got JSON-RPC error response 00:04:52.493 response: 00:04:52.493 { 00:04:52.493 "code": -32602, 00:04:52.493 "message": "Invalid parameters" 00:04:52.493 } 00:04:52.493 03:56:06 -- common/autotest_common.sh@1590 -- # true 00:04:52.493 03:56:06 -- common/autotest_common.sh@1591 -- # (( ++bdf_id )) 00:04:52.493 03:56:06 -- common/autotest_common.sh@1594 -- # killprocess 119130 00:04:52.493 03:56:06 -- common/autotest_common.sh@936 -- # '[' -z 119130 ']' 00:04:52.493 03:56:06 -- common/autotest_common.sh@940 -- # kill -0 119130 00:04:52.493 03:56:06 -- common/autotest_common.sh@941 -- # uname 00:04:52.493 03:56:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:52.493 03:56:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119130 00:04:52.493 03:56:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:52.493 03:56:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:52.493 03:56:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119130' 00:04:52.493 killing process with pid 119130 00:04:52.493 03:56:06 -- common/autotest_common.sh@955 -- # kill 119130 00:04:52.493 03:56:06 -- common/autotest_common.sh@960 -- # wait 119130 00:04:56.691 03:56:10 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:56.691 03:56:10 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:56.691 03:56:10 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:56.691 03:56:10 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:56.691 03:56:10 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:56.691 03:56:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:56.691 03:56:10 -- common/autotest_common.sh@10 -- # set +x 00:04:56.691 03:56:10 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 
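The `get_nvme_bdfs_by_id 0x0a54` step traced above selects controllers by comparing each BDF's sysfs `device` file against a wanted PCI device ID (0x0a54 is the NVMe device ID this log reports at 0000:d8:00.0). A minimal sketch of that check, assuming a helper name and a parameterized sysfs root of my own so the logic can be exercised against a fake tree rather than real hardware (the harness itself goes through `gen_nvme.sh` and `/sys/bus/pci/devices`):

```shell
# Hypothetical helper: print the BDFs under $2 whose `device` file equals $1.
# The real harness reads /sys/bus/pci/devices; the root is a parameter here
# only so the sketch is testable without hardware.
filter_bdfs_by_id() {
    local wanted=$1 root=$2 dir
    for dir in "$root"/*; do
        [ -r "$dir/device" ] || continue
        # sysfs exposes the PCI device ID as a 0x-prefixed hex string
        if [ "$(cat "$dir/device")" = "$wanted" ]; then
            basename "$dir"
        fi
    done
}
```

Against a tree with one ioatdma channel (0x2021) and one NVMe drive (0x0a54), only the NVMe BDF survives the filter, which matches the single `printf '%s\n' 0000:d8:00.0` seen in the trace.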
00:04:56.691 03:56:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.691 03:56:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.691 03:56:10 -- common/autotest_common.sh@10 -- # set +x 00:04:56.691 ************************************ 00:04:56.691 START TEST env 00:04:56.691 ************************************ 00:04:56.691 03:56:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:56.691 * Looking for test storage... 00:04:56.691 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:04:56.691 03:56:10 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:56.691 03:56:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.691 03:56:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.691 03:56:10 -- common/autotest_common.sh@10 -- # set +x 00:04:56.691 ************************************ 00:04:56.691 START TEST env_memory 00:04:56.691 ************************************ 00:04:56.691 03:56:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:56.691 00:04:56.691 00:04:56.691 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.691 http://cunit.sourceforge.net/ 00:04:56.691 00:04:56.691 00:04:56.691 Suite: memory 00:04:56.691 Test: alloc and free memory map ...[2024-04-19 03:56:11.091733] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:56.691 passed 00:04:56.691 Test: mem map translation ...[2024-04-19 03:56:11.108601] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:56.691 [2024-04-19 03:56:11.108615] 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:56.691 [2024-04-19 03:56:11.108646] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:56.691 [2024-04-19 03:56:11.108653] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:56.691 passed 00:04:56.691 Test: mem map registration ...[2024-04-19 03:56:11.142138] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:56.692 [2024-04-19 03:56:11.142152] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:56.692 passed 00:04:56.692 Test: mem map adjacent registrations ...passed 00:04:56.692 00:04:56.692 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.692 suites 1 1 n/a 0 0 00:04:56.692 tests 4 4 4 0 0 00:04:56.692 asserts 152 152 152 0 n/a 00:04:56.692 00:04:56.692 Elapsed time = 0.127 seconds 00:04:56.692 00:04:56.692 real 0m0.138s 00:04:56.692 user 0m0.131s 00:04:56.692 sys 0m0.006s 00:04:56.692 03:56:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:56.692 03:56:11 -- common/autotest_common.sh@10 -- # set +x 00:04:56.692 ************************************ 00:04:56.692 END TEST env_memory 00:04:56.692 ************************************ 00:04:56.692 03:56:11 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:56.692 03:56:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.953 03:56:11 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:04:56.954 03:56:11 -- common/autotest_common.sh@10 -- # set +x 00:04:56.954 ************************************ 00:04:56.954 START TEST env_vtophys 00:04:56.954 ************************************ 00:04:56.954 03:56:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:56.954 EAL: lib.eal log level changed from notice to debug 00:04:56.954 EAL: Detected lcore 0 as core 0 on socket 0 00:04:56.954 EAL: Detected lcore 1 as core 1 on socket 0 00:04:56.954 EAL: Detected lcore 2 as core 2 on socket 0 00:04:56.954 EAL: Detected lcore 3 as core 3 on socket 0 00:04:56.954 EAL: Detected lcore 4 as core 4 on socket 0 00:04:56.954 EAL: Detected lcore 5 as core 5 on socket 0 00:04:56.954 EAL: Detected lcore 6 as core 6 on socket 0 00:04:56.954 EAL: Detected lcore 7 as core 8 on socket 0 00:04:56.954 EAL: Detected lcore 8 as core 9 on socket 0 00:04:56.954 EAL: Detected lcore 9 as core 10 on socket 0 00:04:56.954 EAL: Detected lcore 10 as core 11 on socket 0 00:04:56.954 EAL: Detected lcore 11 as core 12 on socket 0 00:04:56.954 EAL: Detected lcore 12 as core 13 on socket 0 00:04:56.954 EAL: Detected lcore 13 as core 14 on socket 0 00:04:56.954 EAL: Detected lcore 14 as core 16 on socket 0 00:04:56.954 EAL: Detected lcore 15 as core 17 on socket 0 00:04:56.954 EAL: Detected lcore 16 as core 18 on socket 0 00:04:56.954 EAL: Detected lcore 17 as core 19 on socket 0 00:04:56.954 EAL: Detected lcore 18 as core 20 on socket 0 00:04:56.954 EAL: Detected lcore 19 as core 21 on socket 0 00:04:56.954 EAL: Detected lcore 20 as core 22 on socket 0 00:04:56.954 EAL: Detected lcore 21 as core 24 on socket 0 00:04:56.954 EAL: Detected lcore 22 as core 25 on socket 0 00:04:56.954 EAL: Detected lcore 23 as core 26 on socket 0 00:04:56.954 EAL: Detected lcore 24 as core 27 on socket 0 00:04:56.954 EAL: Detected lcore 25 as core 28 on socket 0 00:04:56.954 EAL: Detected lcore 26 as core 29 on socket 0 
00:04:56.954 EAL: Detected lcore 27 as core 30 on socket 0 00:04:56.954 EAL: Detected lcore 28 as core 0 on socket 1 00:04:56.954 EAL: Detected lcore 29 as core 1 on socket 1 00:04:56.954 EAL: Detected lcore 30 as core 2 on socket 1 00:04:56.954 EAL: Detected lcore 31 as core 3 on socket 1 00:04:56.954 EAL: Detected lcore 32 as core 4 on socket 1 00:04:56.954 EAL: Detected lcore 33 as core 5 on socket 1 00:04:56.954 EAL: Detected lcore 34 as core 6 on socket 1 00:04:56.954 EAL: Detected lcore 35 as core 8 on socket 1 00:04:56.954 EAL: Detected lcore 36 as core 9 on socket 1 00:04:56.954 EAL: Detected lcore 37 as core 10 on socket 1 00:04:56.954 EAL: Detected lcore 38 as core 11 on socket 1 00:04:56.954 EAL: Detected lcore 39 as core 12 on socket 1 00:04:56.954 EAL: Detected lcore 40 as core 13 on socket 1 00:04:56.954 EAL: Detected lcore 41 as core 14 on socket 1 00:04:56.954 EAL: Detected lcore 42 as core 16 on socket 1 00:04:56.954 EAL: Detected lcore 43 as core 17 on socket 1 00:04:56.954 EAL: Detected lcore 44 as core 18 on socket 1 00:04:56.954 EAL: Detected lcore 45 as core 19 on socket 1 00:04:56.954 EAL: Detected lcore 46 as core 20 on socket 1 00:04:56.954 EAL: Detected lcore 47 as core 21 on socket 1 00:04:56.954 EAL: Detected lcore 48 as core 22 on socket 1 00:04:56.954 EAL: Detected lcore 49 as core 24 on socket 1 00:04:56.954 EAL: Detected lcore 50 as core 25 on socket 1 00:04:56.954 EAL: Detected lcore 51 as core 26 on socket 1 00:04:56.954 EAL: Detected lcore 52 as core 27 on socket 1 00:04:56.954 EAL: Detected lcore 53 as core 28 on socket 1 00:04:56.954 EAL: Detected lcore 54 as core 29 on socket 1 00:04:56.954 EAL: Detected lcore 55 as core 30 on socket 1 00:04:56.954 EAL: Detected lcore 56 as core 0 on socket 0 00:04:56.954 EAL: Detected lcore 57 as core 1 on socket 0 00:04:56.954 EAL: Detected lcore 58 as core 2 on socket 0 00:04:56.954 EAL: Detected lcore 59 as core 3 on socket 0 00:04:56.954 EAL: Detected lcore 60 as core 4 on socket 0 
00:04:56.954 EAL: Detected lcore 61 as core 5 on socket 0 00:04:56.954 EAL: Detected lcore 62 as core 6 on socket 0 00:04:56.954 EAL: Detected lcore 63 as core 8 on socket 0 00:04:56.954 EAL: Detected lcore 64 as core 9 on socket 0 00:04:56.954 EAL: Detected lcore 65 as core 10 on socket 0 00:04:56.954 EAL: Detected lcore 66 as core 11 on socket 0 00:04:56.954 EAL: Detected lcore 67 as core 12 on socket 0 00:04:56.954 EAL: Detected lcore 68 as core 13 on socket 0 00:04:56.954 EAL: Detected lcore 69 as core 14 on socket 0 00:04:56.954 EAL: Detected lcore 70 as core 16 on socket 0 00:04:56.954 EAL: Detected lcore 71 as core 17 on socket 0 00:04:56.954 EAL: Detected lcore 72 as core 18 on socket 0 00:04:56.954 EAL: Detected lcore 73 as core 19 on socket 0 00:04:56.954 EAL: Detected lcore 74 as core 20 on socket 0 00:04:56.954 EAL: Detected lcore 75 as core 21 on socket 0 00:04:56.954 EAL: Detected lcore 76 as core 22 on socket 0 00:04:56.954 EAL: Detected lcore 77 as core 24 on socket 0 00:04:56.954 EAL: Detected lcore 78 as core 25 on socket 0 00:04:56.954 EAL: Detected lcore 79 as core 26 on socket 0 00:04:56.954 EAL: Detected lcore 80 as core 27 on socket 0 00:04:56.954 EAL: Detected lcore 81 as core 28 on socket 0 00:04:56.954 EAL: Detected lcore 82 as core 29 on socket 0 00:04:56.954 EAL: Detected lcore 83 as core 30 on socket 0 00:04:56.954 EAL: Detected lcore 84 as core 0 on socket 1 00:04:56.954 EAL: Detected lcore 85 as core 1 on socket 1 00:04:56.954 EAL: Detected lcore 86 as core 2 on socket 1 00:04:56.954 EAL: Detected lcore 87 as core 3 on socket 1 00:04:56.954 EAL: Detected lcore 88 as core 4 on socket 1 00:04:56.954 EAL: Detected lcore 89 as core 5 on socket 1 00:04:56.954 EAL: Detected lcore 90 as core 6 on socket 1 00:04:56.954 EAL: Detected lcore 91 as core 8 on socket 1 00:04:56.954 EAL: Detected lcore 92 as core 9 on socket 1 00:04:56.954 EAL: Detected lcore 93 as core 10 on socket 1 00:04:56.954 EAL: Detected lcore 94 as core 11 on socket 1 
00:04:56.954 EAL: Detected lcore 95 as core 12 on socket 1 00:04:56.954 EAL: Detected lcore 96 as core 13 on socket 1 00:04:56.954 EAL: Detected lcore 97 as core 14 on socket 1 00:04:56.954 EAL: Detected lcore 98 as core 16 on socket 1 00:04:56.954 EAL: Detected lcore 99 as core 17 on socket 1 00:04:56.954 EAL: Detected lcore 100 as core 18 on socket 1 00:04:56.954 EAL: Detected lcore 101 as core 19 on socket 1 00:04:56.954 EAL: Detected lcore 102 as core 20 on socket 1 00:04:56.954 EAL: Detected lcore 103 as core 21 on socket 1 00:04:56.954 EAL: Detected lcore 104 as core 22 on socket 1 00:04:56.954 EAL: Detected lcore 105 as core 24 on socket 1 00:04:56.954 EAL: Detected lcore 106 as core 25 on socket 1 00:04:56.954 EAL: Detected lcore 107 as core 26 on socket 1 00:04:56.954 EAL: Detected lcore 108 as core 27 on socket 1 00:04:56.954 EAL: Detected lcore 109 as core 28 on socket 1 00:04:56.954 EAL: Detected lcore 110 as core 29 on socket 1 00:04:56.954 EAL: Detected lcore 111 as core 30 on socket 1 00:04:56.954 EAL: Maximum logical cores by configuration: 128 00:04:56.954 EAL: Detected CPU lcores: 112 00:04:56.954 EAL: Detected NUMA nodes: 2 00:04:56.954 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:56.954 EAL: Detected shared linkage of DPDK 00:04:56.954 EAL: No shared files mode enabled, IPC will be disabled 00:04:56.954 EAL: Bus pci wants IOVA as 'DC' 00:04:56.954 EAL: Buses did not request a specific IOVA mode. 00:04:56.954 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:56.954 EAL: Selected IOVA mode 'VA' 00:04:56.954 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.954 EAL: Probing VFIO support... 
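The EAL banner above enumerates one `Detected lcore N as core M on socket S` line per logical core before concluding "Detected CPU lcores: 112" across 2 NUMA nodes. A small parsing sketch (my own, not part of DPDK or the harness) that tallies those lines per socket, which is a quick way to sanity-check the 56-per-socket split this log implies:

```shell
# Count EAL-reported logical cores per socket by parsing the
# "Detected lcore N as core M on socket S" lines on stdin.
# Output: one "<socket> <count>" line per socket, sorted.
lcores_per_socket() {
    awk '/Detected lcore/ { n[$NF]++ } END { for (s in n) print s, n[s] }' | sort
}
```

Piping this log's EAL section through the function should report 56 lcores on socket 0 and 56 on socket 1, consistent with "Detected NUMA nodes: 2".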
00:04:56.954 EAL: IOMMU type 1 (Type 1) is supported 00:04:56.954 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:56.954 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:56.954 EAL: VFIO support initialized 00:04:56.954 EAL: Ask a virtual area of 0x2e000 bytes 00:04:56.954 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:56.954 EAL: Setting up physically contiguous memory... 00:04:56.954 EAL: Setting maximum number of open files to 524288 00:04:56.954 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:56.954 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:56.954 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:56.954 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.954 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:56.954 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:56.954 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.954 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:56.954 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:56.954 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.954 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:56.954 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:56.954 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.954 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:56.954 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:56.954 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.954 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:56.954 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:56.954 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.954 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:56.954 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:56.954 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.954 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:56.954 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:56.954 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.954 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:56.954 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:56.954 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:56.954 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.954 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:56.954 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:56.954 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.954 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:56.954 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:56.954 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.954 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:56.954 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:56.954 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.954 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:56.954 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:56.954 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.954 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:56.954 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:56.954 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.954 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:56.955 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:56.955 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.955 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:56.955 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:56.955 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.955 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:04:56.955 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:56.955 EAL: Hugepages will be freed exactly as allocated. 00:04:56.955 EAL: No shared files mode enabled, IPC is disabled 00:04:56.955 EAL: No shared files mode enabled, IPC is disabled 00:04:56.955 EAL: TSC frequency is ~2700000 KHz 00:04:56.955 EAL: Main lcore 0 is ready (tid=7fe3a6ae7a00;cpuset=[0]) 00:04:56.955 EAL: Trying to obtain current memory policy. 00:04:56.955 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.955 EAL: Restoring previous memory policy: 0 00:04:56.955 EAL: request: mp_malloc_sync 00:04:56.955 EAL: No shared files mode enabled, IPC is disabled 00:04:56.955 EAL: Heap on socket 0 was expanded by 2MB 00:04:56.955 EAL: No shared files mode enabled, IPC is disabled 00:04:56.955 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:56.955 EAL: Mem event callback 'spdk:(nil)' registered 00:04:56.955 00:04:56.955 00:04:56.955 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.955 http://cunit.sourceforge.net/ 00:04:56.955 00:04:56.955 00:04:56.955 Suite: components_suite 00:04:56.955 Test: vtophys_malloc_test ...passed 00:04:56.955 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:56.955 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.955 EAL: Restoring previous memory policy: 4 00:04:56.955 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.955 EAL: request: mp_malloc_sync 00:04:56.955 EAL: No shared files mode enabled, IPC is disabled 00:04:56.955 EAL: Heap on socket 0 was expanded by 4MB 00:04:56.955 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.955 EAL: request: mp_malloc_sync 00:04:56.955 EAL: No shared files mode enabled, IPC is disabled 00:04:56.955 EAL: Heap on socket 0 was shrunk by 4MB 00:04:56.955 EAL: Trying to obtain current memory policy. 
00:04:56.955 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.955 EAL: Restoring previous memory policy: 4 00:04:56.955 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.955 EAL: request: mp_malloc_sync 00:04:56.955 EAL: No shared files mode enabled, IPC is disabled 00:04:56.955 EAL: Heap on socket 0 was expanded by 6MB 00:04:56.955 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.955 EAL: request: mp_malloc_sync 00:04:56.955 EAL: No shared files mode enabled, IPC is disabled 00:04:56.955 EAL: Heap on socket 0 was shrunk by 6MB 00:04:56.955 EAL: Trying to obtain current memory policy. 00:04:56.955 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.955 EAL: Restoring previous memory policy: 4 00:04:56.955 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.955 EAL: request: mp_malloc_sync 00:04:56.955 EAL: No shared files mode enabled, IPC is disabled 00:04:56.955 EAL: Heap on socket 0 was expanded by 10MB 00:04:56.955 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.955 EAL: request: mp_malloc_sync 00:04:56.955 EAL: No shared files mode enabled, IPC is disabled 00:04:56.955 EAL: Heap on socket 0 was shrunk by 10MB 00:04:56.955 EAL: Trying to obtain current memory policy. 00:04:56.955 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.955 EAL: Restoring previous memory policy: 4 00:04:56.955 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.955 EAL: request: mp_malloc_sync 00:04:56.955 EAL: No shared files mode enabled, IPC is disabled 00:04:56.955 EAL: Heap on socket 0 was expanded by 18MB 00:04:56.955 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.955 EAL: request: mp_malloc_sync 00:04:56.955 EAL: No shared files mode enabled, IPC is disabled 00:04:56.955 EAL: Heap on socket 0 was shrunk by 18MB 00:04:56.955 EAL: Trying to obtain current memory policy. 
00:04:56.955 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.955 EAL: Restoring previous memory policy: 4 00:04:56.955 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.955 EAL: request: mp_malloc_sync 00:04:56.955 EAL: No shared files mode enabled, IPC is disabled 00:04:56.955 EAL: Heap on socket 0 was expanded by 34MB 00:04:56.955 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.955 EAL: request: mp_malloc_sync 00:04:56.955 EAL: No shared files mode enabled, IPC is disabled 00:04:56.955 EAL: Heap on socket 0 was shrunk by 34MB 00:04:56.955 EAL: Trying to obtain current memory policy. 00:04:56.955 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.955 EAL: Restoring previous memory policy: 4 00:04:56.955 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.955 EAL: request: mp_malloc_sync 00:04:56.955 EAL: No shared files mode enabled, IPC is disabled 00:04:56.955 EAL: Heap on socket 0 was expanded by 66MB 00:04:57.215 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.215 EAL: request: mp_malloc_sync 00:04:57.215 EAL: No shared files mode enabled, IPC is disabled 00:04:57.215 EAL: Heap on socket 0 was shrunk by 66MB 00:04:57.215 EAL: Trying to obtain current memory policy. 00:04:57.215 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.215 EAL: Restoring previous memory policy: 4 00:04:57.215 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.215 EAL: request: mp_malloc_sync 00:04:57.215 EAL: No shared files mode enabled, IPC is disabled 00:04:57.215 EAL: Heap on socket 0 was expanded by 130MB 00:04:57.215 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.215 EAL: request: mp_malloc_sync 00:04:57.215 EAL: No shared files mode enabled, IPC is disabled 00:04:57.215 EAL: Heap on socket 0 was shrunk by 130MB 00:04:57.215 EAL: Trying to obtain current memory policy. 
00:04:57.215 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.215 EAL: Restoring previous memory policy: 4 00:04:57.215 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.215 EAL: request: mp_malloc_sync 00:04:57.215 EAL: No shared files mode enabled, IPC is disabled 00:04:57.215 EAL: Heap on socket 0 was expanded by 258MB 00:04:57.215 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.215 EAL: request: mp_malloc_sync 00:04:57.215 EAL: No shared files mode enabled, IPC is disabled 00:04:57.215 EAL: Heap on socket 0 was shrunk by 258MB 00:04:57.215 EAL: Trying to obtain current memory policy. 00:04:57.215 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.475 EAL: Restoring previous memory policy: 4 00:04:57.475 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.475 EAL: request: mp_malloc_sync 00:04:57.475 EAL: No shared files mode enabled, IPC is disabled 00:04:57.475 EAL: Heap on socket 0 was expanded by 514MB 00:04:57.475 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.475 EAL: request: mp_malloc_sync 00:04:57.475 EAL: No shared files mode enabled, IPC is disabled 00:04:57.475 EAL: Heap on socket 0 was shrunk by 514MB 00:04:57.475 EAL: Trying to obtain current memory policy. 
00:04:57.475 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.735 EAL: Restoring previous memory policy: 4 00:04:57.735 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.735 EAL: request: mp_malloc_sync 00:04:57.735 EAL: No shared files mode enabled, IPC is disabled 00:04:57.735 EAL: Heap on socket 0 was expanded by 1026MB 00:04:57.996 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.996 EAL: request: mp_malloc_sync 00:04:57.996 EAL: No shared files mode enabled, IPC is disabled 00:04:57.996 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:57.996 passed 00:04:57.996 00:04:57.996 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.996 suites 1 1 n/a 0 0 00:04:57.996 tests 2 2 2 0 0 00:04:57.996 asserts 497 497 497 0 n/a 00:04:57.996 00:04:57.996 Elapsed time = 0.968 seconds 00:04:57.996 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.996 EAL: request: mp_malloc_sync 00:04:57.996 EAL: No shared files mode enabled, IPC is disabled 00:04:57.996 EAL: Heap on socket 0 was shrunk by 2MB 00:04:57.996 EAL: No shared files mode enabled, IPC is disabled 00:04:57.996 EAL: No shared files mode enabled, IPC is disabled 00:04:57.996 EAL: No shared files mode enabled, IPC is disabled 00:04:57.996 00:04:57.996 real 0m1.100s 00:04:57.996 user 0m0.641s 00:04:57.996 sys 0m0.429s 00:04:57.996 03:56:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:57.996 03:56:12 -- common/autotest_common.sh@10 -- # set +x 00:04:57.996 ************************************ 00:04:57.996 END TEST env_vtophys 00:04:57.996 ************************************ 00:04:57.996 03:56:12 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:57.996 03:56:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.996 03:56:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.996 03:56:12 -- common/autotest_common.sh@10 -- # set +x 00:04:58.255 ************************************ 00:04:58.255 
START TEST env_pci 00:04:58.255 ************************************ 00:04:58.255 03:56:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:58.255 00:04:58.255 00:04:58.255 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.255 http://cunit.sourceforge.net/ 00:04:58.255 00:04:58.255 00:04:58.255 Suite: pci 00:04:58.255 Test: pci_hook ...[2024-04-19 03:56:12.628483] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 121075 has claimed it 00:04:58.255 EAL: Cannot find device (10000:00:01.0) 00:04:58.255 EAL: Failed to attach device on primary process 00:04:58.255 passed 00:04:58.255 00:04:58.255 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.255 suites 1 1 n/a 0 0 00:04:58.255 tests 1 1 1 0 0 00:04:58.255 asserts 25 25 25 0 n/a 00:04:58.255 00:04:58.255 Elapsed time = 0.028 seconds 00:04:58.255 00:04:58.255 real 0m0.048s 00:04:58.255 user 0m0.019s 00:04:58.255 sys 0m0.028s 00:04:58.255 03:56:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:58.255 03:56:12 -- common/autotest_common.sh@10 -- # set +x 00:04:58.255 ************************************ 00:04:58.255 END TEST env_pci 00:04:58.255 ************************************ 00:04:58.255 03:56:12 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:58.255 03:56:12 -- env/env.sh@15 -- # uname 00:04:58.255 03:56:12 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:58.255 03:56:12 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:58.255 03:56:12 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:58.255 03:56:12 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:58.255 03:56:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 
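Each CUnit suite in this log ends with a fixed-layout `Run Summary` table (Type, Total, Ran, Passed, Failed, Inactive). As an illustrative sketch for scraping pass/fail counts out of such transcripts (hypothetical helper, not part of the test harness), the rows can be parsed like so:

```python
import re

# Matches CUnit run-summary rows like "tests 1 1 1 0 0" or "suites 1 1 n/a 0 0".
ROW_RE = re.compile(r"\b(suites|tests|asserts)\s+(\d+)\s+(\d+)\s+(\S+)\s+(\d+)")

def run_summary(log_text):
    """Map row type -> (total, ran, passed, failed).

    The 'passed' column is kept as a string because CUnit prints 'n/a' there
    for the suites row.
    """
    out = {}
    for typ, total, ran, passed, failed in ROW_RE.findall(log_text):
        out[typ] = (int(total), int(ran), passed, int(failed))
    return out

sample = "Run Summary: suites 1 1 n/a 0 0 tests 1 1 1 0 0 asserts 15 15 15 0 n/a"
print(run_summary(sample)["asserts"])  # (15, 15, '15', 0)
```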
00:04:58.255 03:56:12 -- common/autotest_common.sh@10 -- # set +x 00:04:58.515 ************************************ 00:04:58.515 START TEST env_dpdk_post_init 00:04:58.515 ************************************ 00:04:58.515 03:56:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:58.515 EAL: Detected CPU lcores: 112 00:04:58.515 EAL: Detected NUMA nodes: 2 00:04:58.515 EAL: Detected shared linkage of DPDK 00:04:58.515 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:58.515 EAL: Selected IOVA mode 'VA' 00:04:58.515 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.515 EAL: VFIO support initialized 00:04:58.515 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:58.515 EAL: Using IOMMU type 1 (Type 1) 00:04:58.515 EAL: Ignore mapping IO port bar(1) 00:04:58.515 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:58.515 EAL: Ignore mapping IO port bar(1) 00:04:58.515 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:58.515 EAL: Ignore mapping IO port bar(1) 00:04:58.515 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:58.515 EAL: Ignore mapping IO port bar(1) 00:04:58.515 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:58.515 EAL: Ignore mapping IO port bar(1) 00:04:58.515 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:58.515 EAL: Ignore mapping IO port bar(1) 00:04:58.515 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:58.515 EAL: Ignore mapping IO port bar(1) 00:04:58.515 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:58.775 EAL: Ignore mapping IO port bar(1) 00:04:58.775 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:58.775 EAL: Ignore 
mapping IO port bar(1) 00:04:58.775 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:58.775 EAL: Ignore mapping IO port bar(1) 00:04:58.776 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:58.776 EAL: Ignore mapping IO port bar(1) 00:04:58.776 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:58.776 EAL: Ignore mapping IO port bar(1) 00:04:58.776 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:58.776 EAL: Ignore mapping IO port bar(1) 00:04:58.776 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:58.776 EAL: Ignore mapping IO port bar(1) 00:04:58.776 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:58.776 EAL: Ignore mapping IO port bar(1) 00:04:58.776 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:58.776 EAL: Ignore mapping IO port bar(1) 00:04:58.776 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:59.346 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:04.624 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:04.624 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:05:05.194 Starting DPDK initialization... 00:05:05.194 Starting SPDK post initialization... 00:05:05.194 SPDK NVMe probe 00:05:05.194 Attaching to 0000:d8:00.0 00:05:05.194 Attached to 0000:d8:00.0 00:05:05.194 Cleaning up... 
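The `env_dpdk_post_init` run above probes a series of ioat DMA channels (`8086:2021`) on both sockets and one NVMe SSD (`8086:0a54`) at `0000:d8:00.0`. Those `Probe PCI driver` records also have a fixed shape; a small parser (illustrative only) extracts driver, vendor:device ID, BDF address and socket:

```python
import re

# Matches EAL probe records such as:
#   EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1)
PROBE_RE = re.compile(
    r"Probe PCI driver: (\S+) \((\w{4}:\w{4})\) device: (\S+) \(socket (\d+)\)"
)

def parse_probe(line):
    """Return (driver, vendor:device, BDF, socket) for a probe record, else None."""
    m = PROBE_RE.search(line)
    if not m:
        return None
    drv, ids, bdf, socket = m.groups()
    return drv, ids, bdf, int(socket)

line = "EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1)"
print(parse_probe(line))  # ('spdk_nvme', '8086:0a54', '0000:d8:00.0', 1)
```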
00:05:05.194 00:05:05.194 real 0m6.624s 00:05:05.194 user 0m5.464s 00:05:05.194 sys 0m0.235s 00:05:05.194 03:56:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:05.194 03:56:19 -- common/autotest_common.sh@10 -- # set +x 00:05:05.194 ************************************ 00:05:05.194 END TEST env_dpdk_post_init 00:05:05.194 ************************************ 00:05:05.194 03:56:19 -- env/env.sh@26 -- # uname 00:05:05.194 03:56:19 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:05.194 03:56:19 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:05.194 03:56:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.194 03:56:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.194 03:56:19 -- common/autotest_common.sh@10 -- # set +x 00:05:05.194 ************************************ 00:05:05.194 START TEST env_mem_callbacks 00:05:05.194 ************************************ 00:05:05.194 03:56:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:05.194 EAL: Detected CPU lcores: 112 00:05:05.194 EAL: Detected NUMA nodes: 2 00:05:05.194 EAL: Detected shared linkage of DPDK 00:05:05.194 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:05.194 EAL: Selected IOVA mode 'VA' 00:05:05.194 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.194 EAL: VFIO support initialized 00:05:05.194 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:05.194 00:05:05.194 00:05:05.194 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.194 http://cunit.sourceforge.net/ 00:05:05.194 00:05:05.194 00:05:05.194 Suite: memory 00:05:05.194 Test: test ... 
00:05:05.194 register 0x200000200000 2097152 00:05:05.194 malloc 3145728 00:05:05.194 register 0x200000400000 4194304 00:05:05.194 buf 0x200000500000 len 3145728 PASSED 00:05:05.194 malloc 64 00:05:05.194 buf 0x2000004fff40 len 64 PASSED 00:05:05.194 malloc 4194304 00:05:05.194 register 0x200000800000 6291456 00:05:05.194 buf 0x200000a00000 len 4194304 PASSED 00:05:05.194 free 0x200000500000 3145728 00:05:05.194 free 0x2000004fff40 64 00:05:05.194 unregister 0x200000400000 4194304 PASSED 00:05:05.194 free 0x200000a00000 4194304 00:05:05.194 unregister 0x200000800000 6291456 PASSED 00:05:05.194 malloc 8388608 00:05:05.194 register 0x200000400000 10485760 00:05:05.194 buf 0x200000600000 len 8388608 PASSED 00:05:05.194 free 0x200000600000 8388608 00:05:05.194 unregister 0x200000400000 10485760 PASSED 00:05:05.194 passed 00:05:05.194 00:05:05.194 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.194 suites 1 1 n/a 0 0 00:05:05.194 tests 1 1 1 0 0 00:05:05.194 asserts 15 15 15 0 n/a 00:05:05.194 00:05:05.194 Elapsed time = 0.009 seconds 00:05:05.194 00:05:05.194 real 0m0.060s 00:05:05.194 user 0m0.019s 00:05:05.194 sys 0m0.041s 00:05:05.194 03:56:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:05.194 03:56:19 -- common/autotest_common.sh@10 -- # set +x 00:05:05.194 ************************************ 00:05:05.194 END TEST env_mem_callbacks 00:05:05.194 ************************************ 00:05:05.194 00:05:05.194 real 0m8.879s 00:05:05.194 user 0m6.623s 00:05:05.194 sys 0m1.241s 00:05:05.194 03:56:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:05.194 03:56:19 -- common/autotest_common.sh@10 -- # set +x 00:05:05.194 ************************************ 00:05:05.194 END TEST env 00:05:05.194 ************************************ 00:05:05.454 03:56:19 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:05.454 03:56:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 
00:05:05.454 03:56:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.454 03:56:19 -- common/autotest_common.sh@10 -- # set +x 00:05:05.454 ************************************ 00:05:05.454 START TEST rpc 00:05:05.454 ************************************ 00:05:05.454 03:56:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:05.454 * Looking for test storage... 00:05:05.454 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:05.454 03:56:19 -- rpc/rpc.sh@65 -- # spdk_pid=122545 00:05:05.454 03:56:19 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.454 03:56:19 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:05.454 03:56:19 -- rpc/rpc.sh@67 -- # waitforlisten 122545 00:05:05.454 03:56:19 -- common/autotest_common.sh@817 -- # '[' -z 122545 ']' 00:05:05.454 03:56:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.454 03:56:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:05.454 03:56:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.454 03:56:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:05.454 03:56:19 -- common/autotest_common.sh@10 -- # set +x 00:05:05.714 [2024-04-19 03:56:20.017716] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:05:05.714 [2024-04-19 03:56:20.017769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122545 ] 00:05:05.714 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.714 [2024-04-19 03:56:20.085087] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.714 [2024-04-19 03:56:20.158905] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:05.714 [2024-04-19 03:56:20.158944] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 122545' to capture a snapshot of events at runtime. 00:05:05.714 [2024-04-19 03:56:20.158954] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:05.714 [2024-04-19 03:56:20.158961] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:05.714 [2024-04-19 03:56:20.158968] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid122545 for offline analysis/debug. 
00:05:05.714 [2024-04-19 03:56:20.158987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.283 03:56:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:06.283 03:56:20 -- common/autotest_common.sh@850 -- # return 0 00:05:06.283 03:56:20 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:06.283 03:56:20 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:06.283 03:56:20 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:06.283 03:56:20 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:06.283 03:56:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:06.283 03:56:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.283 03:56:20 -- common/autotest_common.sh@10 -- # set +x 00:05:06.543 ************************************ 00:05:06.543 START TEST rpc_integrity 00:05:06.543 ************************************ 00:05:06.543 03:56:20 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:05:06.543 03:56:20 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:06.543 03:56:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:06.543 03:56:20 -- common/autotest_common.sh@10 -- # set +x 00:05:06.543 03:56:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:06.543 03:56:20 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:06.543 03:56:20 -- rpc/rpc.sh@13 -- # jq length 00:05:06.543 03:56:20 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:06.543 03:56:20 -- rpc/rpc.sh@15 
-- # rpc_cmd bdev_malloc_create 8 512 00:05:06.543 03:56:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:06.543 03:56:20 -- common/autotest_common.sh@10 -- # set +x 00:05:06.543 03:56:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:06.543 03:56:20 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:06.543 03:56:20 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:06.543 03:56:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:06.543 03:56:20 -- common/autotest_common.sh@10 -- # set +x 00:05:06.543 03:56:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:06.543 03:56:20 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:06.543 { 00:05:06.543 "name": "Malloc0", 00:05:06.543 "aliases": [ 00:05:06.543 "3a19f578-34cb-4f16-b878-fe5a3ed5d030" 00:05:06.543 ], 00:05:06.543 "product_name": "Malloc disk", 00:05:06.543 "block_size": 512, 00:05:06.543 "num_blocks": 16384, 00:05:06.543 "uuid": "3a19f578-34cb-4f16-b878-fe5a3ed5d030", 00:05:06.543 "assigned_rate_limits": { 00:05:06.543 "rw_ios_per_sec": 0, 00:05:06.543 "rw_mbytes_per_sec": 0, 00:05:06.543 "r_mbytes_per_sec": 0, 00:05:06.543 "w_mbytes_per_sec": 0 00:05:06.543 }, 00:05:06.543 "claimed": false, 00:05:06.543 "zoned": false, 00:05:06.543 "supported_io_types": { 00:05:06.543 "read": true, 00:05:06.543 "write": true, 00:05:06.543 "unmap": true, 00:05:06.543 "write_zeroes": true, 00:05:06.543 "flush": true, 00:05:06.543 "reset": true, 00:05:06.543 "compare": false, 00:05:06.543 "compare_and_write": false, 00:05:06.543 "abort": true, 00:05:06.543 "nvme_admin": false, 00:05:06.543 "nvme_io": false 00:05:06.543 }, 00:05:06.543 "memory_domains": [ 00:05:06.543 { 00:05:06.543 "dma_device_id": "system", 00:05:06.543 "dma_device_type": 1 00:05:06.543 }, 00:05:06.543 { 00:05:06.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:06.543 "dma_device_type": 2 00:05:06.543 } 00:05:06.543 ], 00:05:06.543 "driver_specific": {} 00:05:06.543 } 00:05:06.543 ]' 00:05:06.543 03:56:20 -- rpc/rpc.sh@17 -- # jq length 
00:05:06.544 03:56:21 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:06.544 03:56:21 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:05:06.544 03:56:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:06.544 03:56:21 -- common/autotest_common.sh@10 -- # set +x
00:05:06.544 [2024-04-19 03:56:21.030003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:05:06.544 [2024-04-19 03:56:21.030031] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:06.544 [2024-04-19 03:56:21.030045] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17ea600
00:05:06.544 [2024-04-19 03:56:21.030053] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:06.544 [2024-04-19 03:56:21.031102] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:06.544 [2024-04-19 03:56:21.031124] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:06.544 Passthru0
00:05:06.544 03:56:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:06.544 03:56:21 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:06.544 03:56:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:06.544 03:56:21 -- common/autotest_common.sh@10 -- # set +x
00:05:06.544 03:56:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:06.544 03:56:21 -- rpc/rpc.sh@20 -- # bdevs='[
00:05:06.544 {
00:05:06.544 "name": "Malloc0",
00:05:06.544 "aliases": [
00:05:06.544 "3a19f578-34cb-4f16-b878-fe5a3ed5d030"
00:05:06.544 ],
00:05:06.544 "product_name": "Malloc disk",
00:05:06.544 "block_size": 512,
00:05:06.544 "num_blocks": 16384,
00:05:06.544 "uuid": "3a19f578-34cb-4f16-b878-fe5a3ed5d030",
00:05:06.544 "assigned_rate_limits": {
00:05:06.544 "rw_ios_per_sec": 0,
00:05:06.544 "rw_mbytes_per_sec": 0,
00:05:06.544 "r_mbytes_per_sec": 0,
00:05:06.544 "w_mbytes_per_sec": 0
00:05:06.544 },
00:05:06.544 "claimed": true,
00:05:06.544 "claim_type": "exclusive_write",
00:05:06.544 "zoned": false,
00:05:06.544 "supported_io_types": {
00:05:06.544 "read": true,
00:05:06.544 "write": true,
00:05:06.544 "unmap": true,
00:05:06.544 "write_zeroes": true,
00:05:06.544 "flush": true,
00:05:06.544 "reset": true,
00:05:06.544 "compare": false,
00:05:06.544 "compare_and_write": false,
00:05:06.544 "abort": true,
00:05:06.544 "nvme_admin": false,
00:05:06.544 "nvme_io": false
00:05:06.544 },
00:05:06.544 "memory_domains": [
00:05:06.544 {
00:05:06.544 "dma_device_id": "system",
00:05:06.544 "dma_device_type": 1
00:05:06.544 },
00:05:06.544 {
00:05:06.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:06.544 "dma_device_type": 2
00:05:06.544 }
00:05:06.544 ],
00:05:06.544 "driver_specific": {}
00:05:06.544 },
00:05:06.544 {
00:05:06.544 "name": "Passthru0",
00:05:06.544 "aliases": [
00:05:06.544 "3b1778b3-ba96-5fda-b29d-75d10f1eee63"
00:05:06.544 ],
00:05:06.544 "product_name": "passthru",
00:05:06.544 "block_size": 512,
00:05:06.544 "num_blocks": 16384,
00:05:06.544 "uuid": "3b1778b3-ba96-5fda-b29d-75d10f1eee63",
00:05:06.544 "assigned_rate_limits": {
00:05:06.544 "rw_ios_per_sec": 0,
00:05:06.544 "rw_mbytes_per_sec": 0,
00:05:06.544 "r_mbytes_per_sec": 0,
00:05:06.544 "w_mbytes_per_sec": 0
00:05:06.544 },
00:05:06.544 "claimed": false,
00:05:06.544 "zoned": false,
00:05:06.544 "supported_io_types": {
00:05:06.544 "read": true,
00:05:06.544 "write": true,
00:05:06.544 "unmap": true,
00:05:06.544 "write_zeroes": true,
00:05:06.544 "flush": true,
00:05:06.544 "reset": true,
00:05:06.544 "compare": false,
00:05:06.544 "compare_and_write": false,
00:05:06.544 "abort": true,
00:05:06.544 "nvme_admin": false,
00:05:06.544 "nvme_io": false
00:05:06.544 },
00:05:06.544 "memory_domains": [
00:05:06.544 {
00:05:06.544 "dma_device_id": "system",
00:05:06.544 "dma_device_type": 1
00:05:06.544 },
00:05:06.544 {
00:05:06.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:06.544 "dma_device_type": 2
00:05:06.544 }
00:05:06.544 ],
00:05:06.544 "driver_specific": {
00:05:06.544 "passthru": {
00:05:06.544 "name": "Passthru0",
00:05:06.544 "base_bdev_name": "Malloc0"
00:05:06.544 }
00:05:06.544 }
00:05:06.544 }
00:05:06.544 ]'
00:05:06.803 03:56:21 -- rpc/rpc.sh@21 -- # jq length
00:05:06.803 03:56:21 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:06.803 03:56:21 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:06.803 03:56:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:06.803 03:56:21 -- common/autotest_common.sh@10 -- # set +x
00:05:06.803 03:56:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:06.803 03:56:21 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:05:06.803 03:56:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:06.803 03:56:21 -- common/autotest_common.sh@10 -- # set +x
00:05:06.803 03:56:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:06.803 03:56:21 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:06.803 03:56:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:06.803 03:56:21 -- common/autotest_common.sh@10 -- # set +x
00:05:06.803 03:56:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:06.803 03:56:21 -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:06.803 03:56:21 -- rpc/rpc.sh@26 -- # jq length
00:05:06.803 03:56:21 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:06.803
00:05:06.803 real 0m0.266s
00:05:06.803 user 0m0.168s
00:05:06.803 sys 0m0.035s
00:05:06.803 03:56:21 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:06.803 03:56:21 -- common/autotest_common.sh@10 -- # set +x
00:05:06.803 ************************************
00:05:06.803 END TEST rpc_integrity
00:05:06.803 ************************************
00:05:06.803 03:56:21 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:05:06.803 03:56:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:06.803 03:56:21 -- common/autotest_common.sh@1093 -- # xtrace_disable
03:56:21 -- common/autotest_common.sh@10 -- # set +x
00:05:06.803 ************************************
00:05:06.803 START TEST rpc_plugins
00:05:06.803 ************************************
00:05:06.803 03:56:21 -- common/autotest_common.sh@1111 -- # rpc_plugins
00:05:06.803 03:56:21 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:05:06.803 03:56:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:06.803 03:56:21 -- common/autotest_common.sh@10 -- # set +x
00:05:07.063 03:56:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:07.063 03:56:21 -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:05:07.063 03:56:21 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:05:07.063 03:56:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:07.063 03:56:21 -- common/autotest_common.sh@10 -- # set +x
00:05:07.063 03:56:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:07.063 03:56:21 -- rpc/rpc.sh@31 -- # bdevs='[
00:05:07.063 {
00:05:07.063 "name": "Malloc1",
00:05:07.063 "aliases": [
00:05:07.063 "0b7d7f03-85e8-4e04-b9d9-1084c05e7ee5"
00:05:07.063 ],
00:05:07.063 "product_name": "Malloc disk",
00:05:07.063 "block_size": 4096,
00:05:07.063 "num_blocks": 256,
00:05:07.063 "uuid": "0b7d7f03-85e8-4e04-b9d9-1084c05e7ee5",
00:05:07.063 "assigned_rate_limits": {
00:05:07.063 "rw_ios_per_sec": 0,
00:05:07.063 "rw_mbytes_per_sec": 0,
00:05:07.063 "r_mbytes_per_sec": 0,
00:05:07.063 "w_mbytes_per_sec": 0
00:05:07.063 },
00:05:07.063 "claimed": false,
00:05:07.063 "zoned": false,
00:05:07.063 "supported_io_types": {
00:05:07.063 "read": true,
00:05:07.063 "write": true,
00:05:07.063 "unmap": true,
00:05:07.063 "write_zeroes": true,
00:05:07.063 "flush": true,
00:05:07.063 "reset": true,
00:05:07.063 "compare": false,
00:05:07.063 "compare_and_write": false,
00:05:07.063 "abort": true,
00:05:07.063 "nvme_admin": false,
00:05:07.063 "nvme_io": false
00:05:07.063 },
00:05:07.063 "memory_domains": [
00:05:07.063 {
00:05:07.063 "dma_device_id": "system",
00:05:07.063 "dma_device_type": 1
00:05:07.063 },
00:05:07.063 {
00:05:07.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:07.063 "dma_device_type": 2
00:05:07.063 }
00:05:07.063 ],
00:05:07.063 "driver_specific": {}
00:05:07.063 }
00:05:07.063 ]'
00:05:07.063 03:56:21 -- rpc/rpc.sh@32 -- # jq length
00:05:07.063 03:56:21 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:05:07.063 03:56:21 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:05:07.063 03:56:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:07.063 03:56:21 -- common/autotest_common.sh@10 -- # set +x
00:05:07.063 03:56:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:07.063 03:56:21 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:05:07.063 03:56:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:07.063 03:56:21 -- common/autotest_common.sh@10 -- # set +x
00:05:07.063 03:56:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:07.063 03:56:21 -- rpc/rpc.sh@35 -- # bdevs='[]'
00:05:07.063 03:56:21 -- rpc/rpc.sh@36 -- # jq length
00:05:07.063 03:56:21 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:05:07.063
00:05:07.063 real 0m0.140s
00:05:07.063 user 0m0.087s
00:05:07.063 sys 0m0.016s
00:05:07.063 03:56:21 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:07.063 03:56:21 -- common/autotest_common.sh@10 -- # set +x
00:05:07.064 ************************************
00:05:07.064 END TEST rpc_plugins
00:05:07.064 ************************************
00:05:07.064 03:56:21 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:05:07.064 03:56:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:07.064 03:56:21 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:07.064 03:56:21 -- common/autotest_common.sh@10 -- # set +x
00:05:07.324 ************************************
00:05:07.324 START TEST rpc_trace_cmd_test
00:05:07.324 ************************************
00:05:07.324 03:56:21 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test
00:05:07.324 03:56:21 -- rpc/rpc.sh@40 -- # local info
00:05:07.324 03:56:21 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:05:07.324 03:56:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:07.324 03:56:21 -- common/autotest_common.sh@10 -- # set +x
00:05:07.324 03:56:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:07.324 03:56:21 -- rpc/rpc.sh@42 -- # info='{
00:05:07.324 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid122545",
00:05:07.324 "tpoint_group_mask": "0x8",
00:05:07.324 "iscsi_conn": {
00:05:07.324 "mask": "0x2",
00:05:07.324 "tpoint_mask": "0x0"
00:05:07.324 },
00:05:07.324 "scsi": {
00:05:07.324 "mask": "0x4",
00:05:07.324 "tpoint_mask": "0x0"
00:05:07.324 },
00:05:07.324 "bdev": {
00:05:07.324 "mask": "0x8",
00:05:07.324 "tpoint_mask": "0xffffffffffffffff"
00:05:07.324 },
00:05:07.324 "nvmf_rdma": {
00:05:07.324 "mask": "0x10",
00:05:07.324 "tpoint_mask": "0x0"
00:05:07.324 },
00:05:07.324 "nvmf_tcp": {
00:05:07.324 "mask": "0x20",
00:05:07.324 "tpoint_mask": "0x0"
00:05:07.324 },
00:05:07.324 "ftl": {
00:05:07.324 "mask": "0x40",
00:05:07.324 "tpoint_mask": "0x0"
00:05:07.324 },
00:05:07.324 "blobfs": {
00:05:07.324 "mask": "0x80",
00:05:07.324 "tpoint_mask": "0x0"
00:05:07.324 },
00:05:07.324 "dsa": {
00:05:07.324 "mask": "0x200",
00:05:07.324 "tpoint_mask": "0x0"
00:05:07.324 },
00:05:07.324 "thread": {
00:05:07.324 "mask": "0x400",
00:05:07.324 "tpoint_mask": "0x0"
00:05:07.324 },
00:05:07.324 "nvme_pcie": {
00:05:07.324 "mask": "0x800",
00:05:07.324 "tpoint_mask": "0x0"
00:05:07.324 },
00:05:07.324 "iaa": {
00:05:07.324 "mask": "0x1000",
00:05:07.324 "tpoint_mask": "0x0"
00:05:07.324 },
00:05:07.324 "nvme_tcp": {
00:05:07.324 "mask": "0x2000",
00:05:07.324 "tpoint_mask": "0x0"
00:05:07.324 },
00:05:07.324 "bdev_nvme": {
00:05:07.324 "mask": "0x4000",
00:05:07.324 "tpoint_mask": "0x0"
00:05:07.324 },
00:05:07.324 "sock": {
00:05:07.324 "mask": "0x8000",
00:05:07.324 "tpoint_mask": "0x0"
00:05:07.324 }
00:05:07.324 }'
00:05:07.324 03:56:21 -- rpc/rpc.sh@43 -- # jq length
00:05:07.324 03:56:21 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']'
00:05:07.324 03:56:21 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:05:07.324 03:56:21 -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:05:07.324 03:56:21 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:05:07.324 03:56:21 -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:05:07.324 03:56:21 -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:05:07.324 03:56:21 -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:05:07.324 03:56:21 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:05:07.324 03:56:21 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:05:07.324
00:05:07.324 real 0m0.181s
00:05:07.324 user 0m0.150s
00:05:07.324 sys 0m0.024s
00:05:07.324 03:56:21 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:07.324 03:56:21 -- common/autotest_common.sh@10 -- # set +x
00:05:07.324 ************************************
00:05:07.324 END TEST rpc_trace_cmd_test
00:05:07.324 ************************************
00:05:07.324 03:56:21 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:05:07.324 03:56:21 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:05:07.324 03:56:21 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:05:07.324 03:56:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:07.324 03:56:21 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:07.324 03:56:21 -- common/autotest_common.sh@10 -- # set +x
00:05:07.584 ************************************
00:05:07.584 START TEST rpc_daemon_integrity
00:05:07.584 ************************************
00:05:07.584 03:56:21 -- common/autotest_common.sh@1111 -- # rpc_integrity
00:05:07.584 03:56:21 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:07.584 03:56:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:07.584 03:56:21 -- common/autotest_common.sh@10 -- # set +x
00:05:07.584 03:56:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:07.584 03:56:21 -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:07.584 03:56:21 -- rpc/rpc.sh@13 -- # jq length
00:05:07.584 03:56:22 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:07.584 03:56:22 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:07.584 03:56:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:07.584 03:56:22 -- common/autotest_common.sh@10 -- # set +x
00:05:07.584 03:56:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:07.584 03:56:22 -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:05:07.584 03:56:22 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:07.584 03:56:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:07.584 03:56:22 -- common/autotest_common.sh@10 -- # set +x
00:05:07.584 03:56:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:07.584 03:56:22 -- rpc/rpc.sh@16 -- # bdevs='[
00:05:07.584 {
00:05:07.584 "name": "Malloc2",
00:05:07.584 "aliases": [
00:05:07.584 "35d0fb9f-dba7-4222-9971-1950d19c9b02"
00:05:07.584 ],
00:05:07.584 "product_name": "Malloc disk",
00:05:07.584 "block_size": 512,
00:05:07.584 "num_blocks": 16384,
00:05:07.584 "uuid": "35d0fb9f-dba7-4222-9971-1950d19c9b02",
00:05:07.584 "assigned_rate_limits": {
00:05:07.584 "rw_ios_per_sec": 0,
00:05:07.584 "rw_mbytes_per_sec": 0,
00:05:07.584 "r_mbytes_per_sec": 0,
00:05:07.584 "w_mbytes_per_sec": 0
00:05:07.584 },
00:05:07.584 "claimed": false,
00:05:07.584 "zoned": false,
00:05:07.584 "supported_io_types": {
00:05:07.584 "read": true,
00:05:07.584 "write": true,
00:05:07.584 "unmap": true,
00:05:07.584 "write_zeroes": true,
00:05:07.584 "flush": true,
00:05:07.584 "reset": true,
00:05:07.584 "compare": false,
00:05:07.584 "compare_and_write": false,
00:05:07.584 "abort": true,
00:05:07.584 "nvme_admin": false,
00:05:07.584 "nvme_io": false
00:05:07.584 },
00:05:07.584 "memory_domains": [
00:05:07.584 {
00:05:07.584 "dma_device_id": "system",
00:05:07.584 "dma_device_type": 1
00:05:07.584 },
00:05:07.584 {
00:05:07.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:07.584 "dma_device_type": 2
00:05:07.584 }
00:05:07.584 ],
00:05:07.584 "driver_specific": {}
00:05:07.584 }
00:05:07.584 ]'
00:05:07.584 03:56:22 -- rpc/rpc.sh@17 -- # jq length
00:05:07.584 03:56:22 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:07.584 03:56:22 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:05:07.584 03:56:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:07.584 03:56:22 -- common/autotest_common.sh@10 -- # set +x
00:05:07.584 [2024-04-19 03:56:22.084849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:05:07.584 [2024-04-19 03:56:22.084877] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:07.584 [2024-04-19 03:56:22.084894] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17ea110
00:05:07.584 [2024-04-19 03:56:22.084902] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:07.585 [2024-04-19 03:56:22.085824] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:07.585 [2024-04-19 03:56:22.085843] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:07.585 Passthru0
00:05:07.585 03:56:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:07.585 03:56:22 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:07.585 03:56:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:07.585 03:56:22 -- common/autotest_common.sh@10 -- # set +x
00:05:07.585 03:56:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:07.585 03:56:22 -- rpc/rpc.sh@20 -- # bdevs='[
00:05:07.585 {
00:05:07.585 "name": "Malloc2",
00:05:07.585 "aliases": [
00:05:07.585 "35d0fb9f-dba7-4222-9971-1950d19c9b02"
00:05:07.585 ],
00:05:07.585 "product_name": "Malloc disk",
00:05:07.585 "block_size": 512,
00:05:07.585 "num_blocks": 16384,
00:05:07.585 "uuid": "35d0fb9f-dba7-4222-9971-1950d19c9b02",
00:05:07.585 "assigned_rate_limits": {
00:05:07.585 "rw_ios_per_sec": 0,
00:05:07.585 "rw_mbytes_per_sec": 0,
00:05:07.585 "r_mbytes_per_sec": 0,
00:05:07.585 "w_mbytes_per_sec": 0
00:05:07.585 },
00:05:07.585 "claimed": true,
00:05:07.585 "claim_type": "exclusive_write",
00:05:07.585 "zoned": false,
00:05:07.585 "supported_io_types": {
00:05:07.585 "read": true,
00:05:07.585 "write": true,
00:05:07.585 "unmap": true,
00:05:07.585 "write_zeroes": true,
00:05:07.585 "flush": true,
00:05:07.585 "reset": true,
00:05:07.585 "compare": false,
00:05:07.585 "compare_and_write": false,
00:05:07.585 "abort": true,
00:05:07.585 "nvme_admin": false,
00:05:07.585 "nvme_io": false
00:05:07.585 },
00:05:07.585 "memory_domains": [
00:05:07.585 {
00:05:07.585 "dma_device_id": "system",
00:05:07.585 "dma_device_type": 1
00:05:07.585 },
00:05:07.585 {
00:05:07.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:07.585 "dma_device_type": 2
00:05:07.585 }
00:05:07.585 ],
00:05:07.585 "driver_specific": {}
00:05:07.585 },
00:05:07.585 {
00:05:07.585 "name": "Passthru0",
00:05:07.585 "aliases": [
00:05:07.585 "316ba5bc-b47b-5303-b670-fe7ac6d0ee70"
00:05:07.585 ],
00:05:07.585 "product_name": "passthru",
00:05:07.585 "block_size": 512,
00:05:07.585 "num_blocks": 16384,
00:05:07.585 "uuid": "316ba5bc-b47b-5303-b670-fe7ac6d0ee70",
00:05:07.585 "assigned_rate_limits": {
00:05:07.585 "rw_ios_per_sec": 0,
00:05:07.585 "rw_mbytes_per_sec": 0,
00:05:07.585 "r_mbytes_per_sec": 0,
00:05:07.585 "w_mbytes_per_sec": 0
00:05:07.585 },
00:05:07.585 "claimed": false,
00:05:07.585 "zoned": false,
00:05:07.585 "supported_io_types": {
00:05:07.585 "read": true,
00:05:07.585 "write": true,
00:05:07.585 "unmap": true,
00:05:07.585 "write_zeroes": true,
00:05:07.585 "flush": true,
00:05:07.585 "reset": true,
00:05:07.585 "compare": false,
00:05:07.585 "compare_and_write": false,
00:05:07.585 "abort": true,
00:05:07.585 "nvme_admin": false,
00:05:07.585 "nvme_io": false
00:05:07.585 },
00:05:07.585 "memory_domains": [
00:05:07.585 {
00:05:07.585 "dma_device_id": "system",
00:05:07.585 "dma_device_type": 1
00:05:07.585 },
00:05:07.585 {
00:05:07.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:07.585 "dma_device_type": 2
00:05:07.585 }
00:05:07.585 ],
00:05:07.585 "driver_specific": {
00:05:07.585 "passthru": {
00:05:07.585 "name": "Passthru0",
00:05:07.585 "base_bdev_name": "Malloc2"
00:05:07.585 }
00:05:07.585 }
00:05:07.585 }
00:05:07.585 ]'
00:05:07.845 03:56:22 -- rpc/rpc.sh@21 -- # jq length
00:05:07.845 03:56:22 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:07.845 03:56:22 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:07.845 03:56:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:07.845 03:56:22 -- common/autotest_common.sh@10 -- # set +x
00:05:07.845 03:56:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:07.845 03:56:22 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:05:07.845 03:56:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:07.845 03:56:22 -- common/autotest_common.sh@10 -- # set +x
00:05:07.845 03:56:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:07.845 03:56:22 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:07.845 03:56:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:07.845 03:56:22 -- common/autotest_common.sh@10 -- # set +x
00:05:07.845 03:56:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:07.845 03:56:22 -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:07.845 03:56:22 -- rpc/rpc.sh@26 -- # jq length
00:05:07.845 03:56:22 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:07.845
00:05:07.845 real 0m0.265s
00:05:07.845 user 0m0.171s
00:05:07.845 sys 0m0.033s
00:05:07.845 03:56:22 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:07.845 03:56:22 -- common/autotest_common.sh@10 -- # set +x
00:05:07.845 ************************************
00:05:07.845 END TEST rpc_daemon_integrity
00:05:07.845 ************************************
03:56:22 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:05:07.845 03:56:22 -- rpc/rpc.sh@84 -- # killprocess 122545
00:05:07.845 03:56:22 -- common/autotest_common.sh@936 -- # '[' -z 122545 ']'
00:05:07.845 03:56:22 -- common/autotest_common.sh@940 -- # kill -0 122545
00:05:07.845 03:56:22 -- common/autotest_common.sh@941 -- # uname
00:05:07.845 03:56:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:07.845 03:56:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122545
00:05:07.845 03:56:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:07.845 03:56:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:07.845 03:56:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122545'
00:05:07.845 killing process with pid 122545
00:05:07.845 03:56:22 -- common/autotest_common.sh@955 -- # kill 122545
00:05:07.845 03:56:22 -- common/autotest_common.sh@960 -- # wait 122545
00:05:08.105
00:05:08.105 real 0m2.744s
00:05:08.105 user 0m3.549s
00:05:08.105 sys 0m0.793s
00:05:08.105 03:56:22 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:08.105 03:56:22 -- common/autotest_common.sh@10 -- # set +x
00:05:08.105 ************************************
00:05:08.105 END TEST rpc
00:05:08.105 ************************************
00:05:08.365 03:56:22 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:05:08.365 03:56:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:08.365 03:56:22 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:08.365 03:56:22 -- common/autotest_common.sh@10 -- # set +x
00:05:08.365 ************************************
00:05:08.365 START TEST skip_rpc
00:05:08.365 ************************************
00:05:08.365 03:56:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:05:08.365 * Looking for test storage...
00:05:08.365 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc
00:05:08.365 03:56:22 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json
00:05:08.365 03:56:22 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt
00:05:08.365 03:56:22 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:05:08.365 03:56:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:08.365 03:56:22 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:08.365 03:56:22 -- common/autotest_common.sh@10 -- # set +x
00:05:08.625 ************************************
00:05:08.625 START TEST skip_rpc
00:05:08.625 ************************************
00:05:08.625 03:56:23 -- common/autotest_common.sh@1111 -- # test_skip_rpc
00:05:08.625 03:56:23 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=123393
00:05:08.625 03:56:23 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:08.625 03:56:23 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:05:08.625 03:56:23 -- rpc/skip_rpc.sh@19 -- # sleep 5
00:05:08.625 [2024-04-19 03:56:23.056921] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization...
00:05:08.625 [2024-04-19 03:56:23.056954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123393 ]
00:05:08.625 EAL: No free 2048 kB hugepages reported on node 1
00:05:08.625 [2024-04-19 03:56:23.122871] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:08.884 [2024-04-19 03:56:23.190134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:14.165 03:56:28 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:05:14.165 03:56:28 -- common/autotest_common.sh@638 -- # local es=0
00:05:14.165 03:56:28 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version
00:05:14.165 03:56:28 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd
00:05:14.165 03:56:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:05:14.165 03:56:28 -- common/autotest_common.sh@630 -- # type -t rpc_cmd
00:05:14.165 03:56:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:05:14.165 03:56:28 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version
00:05:14.165 03:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:14.165 03:56:28 -- common/autotest_common.sh@10 -- # set +x
00:05:14.165 03:56:28 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]]
00:05:14.165 03:56:28 -- common/autotest_common.sh@641 -- # es=1
00:05:14.165 03:56:28 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:05:14.165 03:56:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:05:14.165 03:56:28 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:05:14.165 03:56:28 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:05:14.165 03:56:28 -- rpc/skip_rpc.sh@23 -- # killprocess 123393
00:05:14.165 03:56:28 -- common/autotest_common.sh@936 -- # '[' -z 123393 ']'
00:05:14.165 03:56:28 -- common/autotest_common.sh@940 -- # kill -0 123393
00:05:14.165 03:56:28 -- common/autotest_common.sh@941 -- # uname
00:05:14.165 03:56:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:14.165 03:56:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123393
00:05:14.165 03:56:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:14.165 03:56:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:14.165 03:56:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 123393'
00:05:14.165 killing process with pid 123393
00:05:14.165 03:56:28 -- common/autotest_common.sh@955 -- # kill 123393
00:05:14.165 03:56:28 -- common/autotest_common.sh@960 -- # wait 123393
00:05:14.165
00:05:14.165 real 0m5.377s
00:05:14.165 user 0m5.142s
00:05:14.165 sys 0m0.266s
00:05:14.165 03:56:28 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:14.165 03:56:28 -- common/autotest_common.sh@10 -- # set +x
00:05:14.165 ************************************
00:05:14.165 END TEST skip_rpc
00:05:14.165 ************************************
00:05:14.165 03:56:28 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:05:14.165 03:56:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:14.165 03:56:28 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:14.165 03:56:28 -- common/autotest_common.sh@10 -- # set +x
00:05:14.165 ************************************
00:05:14.165 START TEST skip_rpc_with_json
00:05:14.165 ************************************
00:05:14.165 03:56:28 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json
00:05:14.165 03:56:28 -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:05:14.165 03:56:28 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=124372
00:05:14.165 03:56:28 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:14.165 03:56:28 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
03:56:28 -- rpc/skip_rpc.sh@31 -- # waitforlisten 124372
00:05:14.165 03:56:28 -- common/autotest_common.sh@817 -- # '[' -z 124372 ']'
00:05:14.165 03:56:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:14.165 03:56:28 -- common/autotest_common.sh@822 -- # local max_retries=100
00:05:14.165 03:56:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:14.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:14.165 03:56:28 -- common/autotest_common.sh@826 -- # xtrace_disable
00:05:14.165 03:56:28 -- common/autotest_common.sh@10 -- # set +x
00:05:14.165 [2024-04-19 03:56:28.610922] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization...
00:05:14.165 [2024-04-19 03:56:28.610978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124372 ]
00:05:14.165 EAL: No free 2048 kB hugepages reported on node 1
00:05:14.165 [2024-04-19 03:56:28.676970] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:14.425 [2024-04-19 03:56:28.751065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:14.995 03:56:29 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:05:14.995 03:56:29 -- common/autotest_common.sh@850 -- # return 0
00:05:14.995 03:56:29 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:05:14.995 03:56:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:14.995 03:56:29 -- common/autotest_common.sh@10 -- # set +x
00:05:14.995 [2024-04-19 03:56:29.386016] nvmf_rpc.c:2509:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:05:14.995 request:
00:05:14.995 {
00:05:14.995 "trtype": "tcp",
00:05:14.995 "method": "nvmf_get_transports",
00:05:14.995 "req_id": 1
00:05:14.995 }
00:05:14.995 Got JSON-RPC error response
00:05:14.995 response:
00:05:14.995 {
00:05:14.995 "code": -19,
00:05:14.995 "message": "No such device"
00:05:14.995 }
00:05:14.995 03:56:29 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]]
00:05:14.995 03:56:29 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:05:14.995 03:56:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:14.995 03:56:29 -- common/autotest_common.sh@10 -- # set +x
00:05:14.995 [2024-04-19 03:56:29.394101] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:14.995 03:56:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:14.995 03:56:29 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:05:14.995 03:56:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:14.995 03:56:29 -- common/autotest_common.sh@10 -- # set +x
00:05:15.255 03:56:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:05:15.255 03:56:29 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json
00:05:15.255 {
00:05:15.255 "subsystems": [
00:05:15.255 {
00:05:15.255 "subsystem": "keyring",
00:05:15.255 "config": []
00:05:15.255 },
00:05:15.255 {
00:05:15.255 "subsystem": "iobuf",
00:05:15.255 "config": [
00:05:15.255 {
00:05:15.255 "method": "iobuf_set_options",
00:05:15.255 "params": {
00:05:15.255 "small_pool_count": 8192,
00:05:15.255 "large_pool_count": 1024,
00:05:15.255 "small_bufsize": 8192,
00:05:15.255 "large_bufsize": 135168
00:05:15.255 }
00:05:15.255 }
00:05:15.255 ]
00:05:15.255 },
00:05:15.255 {
00:05:15.255 "subsystem": "sock",
00:05:15.255 "config": [
00:05:15.255 {
00:05:15.255 "method": "sock_impl_set_options",
00:05:15.255 "params": {
00:05:15.255 "impl_name": "posix",
00:05:15.255 "recv_buf_size": 2097152,
00:05:15.255 "send_buf_size": 2097152,
00:05:15.255 "enable_recv_pipe": true,
00:05:15.255 "enable_quickack": false,
00:05:15.255 "enable_placement_id": 0,
00:05:15.255 "enable_zerocopy_send_server": true,
00:05:15.255 "enable_zerocopy_send_client": false,
00:05:15.255 "zerocopy_threshold": 0,
00:05:15.255 "tls_version": 0,
00:05:15.255 "enable_ktls": false
00:05:15.255 }
00:05:15.255 },
00:05:15.255 {
00:05:15.255 "method": "sock_impl_set_options",
00:05:15.255 "params": {
00:05:15.255 "impl_name": "ssl",
00:05:15.255 "recv_buf_size": 4096,
00:05:15.255 "send_buf_size": 4096,
00:05:15.255 "enable_recv_pipe": true,
00:05:15.255 "enable_quickack": false,
00:05:15.255 "enable_placement_id": 0,
00:05:15.255 "enable_zerocopy_send_server": true,
00:05:15.255 "enable_zerocopy_send_client": false,
00:05:15.255 "zerocopy_threshold": 0,
00:05:15.255 "tls_version": 0,
00:05:15.255 "enable_ktls": false
00:05:15.255 }
00:05:15.255 }
00:05:15.255 ]
00:05:15.255 },
00:05:15.255 {
00:05:15.255 "subsystem": "vmd",
00:05:15.255 "config": []
00:05:15.255 },
00:05:15.255 {
00:05:15.255 "subsystem": "accel",
00:05:15.255 "config": [
00:05:15.255 {
00:05:15.255 "method": "accel_set_options",
00:05:15.255 "params": {
00:05:15.255 "small_cache_size": 128,
00:05:15.255 "large_cache_size": 16,
00:05:15.255 "task_count": 2048,
00:05:15.255 "sequence_count": 2048,
00:05:15.255 "buf_count": 2048
00:05:15.255 }
00:05:15.255 }
00:05:15.255 ]
00:05:15.255 },
00:05:15.255 {
00:05:15.255 "subsystem": "bdev",
00:05:15.255 "config": [
00:05:15.255 {
00:05:15.255 "method": "bdev_set_options",
00:05:15.255 "params": {
00:05:15.255 "bdev_io_pool_size": 65535,
00:05:15.255 "bdev_io_cache_size": 256,
00:05:15.255 "bdev_auto_examine": true,
00:05:15.255 "iobuf_small_cache_size": 128,
00:05:15.255 "iobuf_large_cache_size": 16
00:05:15.255 }
00:05:15.255 },
00:05:15.255 {
00:05:15.255 "method": "bdev_raid_set_options",
00:05:15.255 "params": {
00:05:15.255 "process_window_size_kb": 1024
00:05:15.255 }
00:05:15.255 },
00:05:15.255 {
00:05:15.255 "method": "bdev_iscsi_set_options",
00:05:15.255 "params": {
00:05:15.255 "timeout_sec": 30
00:05:15.255 }
00:05:15.255 },
00:05:15.255 {
00:05:15.255 "method": "bdev_nvme_set_options",
00:05:15.255 "params": {
00:05:15.255 "action_on_timeout": "none",
00:05:15.255 "timeout_us": 0,
00:05:15.255 "timeout_admin_us": 0,
00:05:15.255 "keep_alive_timeout_ms": 10000,
00:05:15.255 "arbitration_burst": 0,
00:05:15.255 "low_priority_weight": 0,
00:05:15.255 "medium_priority_weight": 0,
00:05:15.255 "high_priority_weight": 0,
00:05:15.255 "nvme_adminq_poll_period_us": 10000,
00:05:15.255 "nvme_ioq_poll_period_us": 0,
00:05:15.255 "io_queue_requests": 0,
00:05:15.255 "delay_cmd_submit": true,
00:05:15.255 "transport_retry_count": 4,
00:05:15.255 "bdev_retry_count": 3,
00:05:15.255 "transport_ack_timeout": 0,
00:05:15.255 "ctrlr_loss_timeout_sec": 0,
00:05:15.255 "reconnect_delay_sec": 0,
00:05:15.255 "fast_io_fail_timeout_sec": 0,
00:05:15.255 "disable_auto_failback": false,
00:05:15.255 "generate_uuids": false,
00:05:15.255 "transport_tos": 0,
00:05:15.255 "nvme_error_stat": false,
00:05:15.255 "rdma_srq_size": 0,
00:05:15.255 "io_path_stat": false,
00:05:15.255 "allow_accel_sequence": false,
00:05:15.255 "rdma_max_cq_size": 0,
00:05:15.255 "rdma_cm_event_timeout_ms": 0,
00:05:15.255 "dhchap_digests": [
00:05:15.255 "sha256",
00:05:15.255 "sha384",
00:05:15.255 "sha512"
00:05:15.255 ],
00:05:15.255 "dhchap_dhgroups": [
00:05:15.255 "null",
00:05:15.255 "ffdhe2048",
00:05:15.255 "ffdhe3072",
00:05:15.255 "ffdhe4096",
00:05:15.255 "ffdhe6144",
00:05:15.255 "ffdhe8192"
00:05:15.255 ]
00:05:15.255 }
00:05:15.255 },
00:05:15.255 {
00:05:15.255 "method": "bdev_nvme_set_hotplug",
00:05:15.255 "params": {
00:05:15.255 "period_us": 100000,
00:05:15.255 "enable": false
00:05:15.255 }
00:05:15.255 },
00:05:15.255 {
00:05:15.255 "method": "bdev_wait_for_examine"
00:05:15.255 }
00:05:15.255 ]
00:05:15.255 },
00:05:15.255 {
00:05:15.255 "subsystem": "scsi",
00:05:15.255 "config": null
00:05:15.255 },
00:05:15.255 {
00:05:15.255 "subsystem": "scheduler",
00:05:15.255 "config": [
00:05:15.255 {
00:05:15.255 "method": "framework_set_scheduler",
00:05:15.255 "params": {
00:05:15.255 "name": "static"
00:05:15.255 }
00:05:15.255 }
00:05:15.255 ]
00:05:15.255 },
00:05:15.255 {
00:05:15.256 "subsystem": "vhost_scsi",
00:05:15.256 "config": []
00:05:15.256 },
00:05:15.256 {
00:05:15.256 "subsystem": "vhost_blk",
00:05:15.256 "config": []
00:05:15.256 },
00:05:15.256 {
00:05:15.256 "subsystem": "ublk",
00:05:15.256 "config": []
00:05:15.256 },
00:05:15.256 {
00:05:15.256 "subsystem": "nbd",
00:05:15.256 "config": []
00:05:15.256 },
00:05:15.256 {
00:05:15.256 "subsystem": "nvmf",
00:05:15.256 "config": [
00:05:15.256 {
00:05:15.256 "method": "nvmf_set_config",
00:05:15.256 "params": {
00:05:15.256 "discovery_filter": "match_any",
00:05:15.256 "admin_cmd_passthru": {
00:05:15.256 "identify_ctrlr": false
00:05:15.256 }
00:05:15.256 }
00:05:15.256 },
00:05:15.256 {
00:05:15.256 "method": "nvmf_set_max_subsystems",
00:05:15.256 "params": {
00:05:15.256 "max_subsystems": 1024
00:05:15.256 }
00:05:15.256 },
00:05:15.256 {
00:05:15.256 "method": "nvmf_set_crdt",
00:05:15.256 "params": {
00:05:15.256 "crdt1": 0,
00:05:15.256 "crdt2": 0,
00:05:15.256 "crdt3": 0
00:05:15.256 }
00:05:15.256 },
00:05:15.256 {
00:05:15.256 "method": "nvmf_create_transport",
00:05:15.256 "params": {
00:05:15.256 "trtype": "TCP",
00:05:15.256 "max_queue_depth": 128,
00:05:15.256 "max_io_qpairs_per_ctrlr": 127,
00:05:15.256 "in_capsule_data_size": 4096,
00:05:15.256 "max_io_size": 131072,
00:05:15.256 "io_unit_size": 131072,
00:05:15.256 "max_aq_depth": 128,
00:05:15.256 "num_shared_buffers": 511,
00:05:15.256 "buf_cache_size": 4294967295,
00:05:15.256 "dif_insert_or_strip": false,
00:05:15.256 "zcopy": false,
00:05:15.256 "c2h_success": true,
00:05:15.256 "sock_priority": 0,
00:05:15.256 "abort_timeout_sec": 1,
00:05:15.256 "ack_timeout": 0
00:05:15.256 }
00:05:15.256 }
00:05:15.256 ]
00:05:15.256 },
00:05:15.256 {
00:05:15.256 "subsystem": "iscsi",
00:05:15.256 "config": [
00:05:15.256 {
00:05:15.256 "method": "iscsi_set_options",
00:05:15.256 "params": {
00:05:15.256 "node_base": "iqn.2016-06.io.spdk",
00:05:15.256 "max_sessions": 128,
00:05:15.256 "max_connections_per_session": 2,
00:05:15.256 "max_queue_depth": 64,
00:05:15.256 "default_time2wait": 2,
00:05:15.256 "default_time2retain": 20,
00:05:15.256 "first_burst_length": 8192,
00:05:15.256 "immediate_data": true,
00:05:15.256 "allow_duplicated_isid": false,
00:05:15.256 "error_recovery_level": 0,
00:05:15.256 "nop_timeout": 60,
00:05:15.256 "nop_in_interval": 30,
00:05:15.256 "disable_chap": false,
00:05:15.256 "require_chap": false,
00:05:15.256 "mutual_chap": false,
00:05:15.256 "chap_group": 0,
00:05:15.256 "max_large_datain_per_connection": 64,
00:05:15.256 "max_r2t_per_connection": 4,
00:05:15.256 "pdu_pool_size": 36864,
00:05:15.256 "immediate_data_pool_size": 16384,
00:05:15.256 "data_out_pool_size": 2048
00:05:15.256 }
00:05:15.256 }
00:05:15.256 ]
00:05:15.256 }
00:05:15.256 ]
00:05:15.256 }
00:05:15.256 03:56:29 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:05:15.256 03:56:29 -- rpc/skip_rpc.sh@40 -- # killprocess 124372
00:05:15.256 03:56:29 -- common/autotest_common.sh@936 -- # '[' -z 124372 ']'
00:05:15.256 03:56:29 -- common/autotest_common.sh@940 -- # kill -0 124372
00:05:15.256 03:56:29 -- common/autotest_common.sh@941 -- # uname
00:05:15.256 03:56:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:15.256 03:56:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124372
00:05:15.256 03:56:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:15.256 03:56:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:15.256 03:56:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 124372'
00:05:15.256 killing process with pid 124372
00:05:15.256 03:56:29 -- common/autotest_common.sh@955 -- # kill 124372
00:05:15.256 03:56:29 -- common/autotest_common.sh@960 -- # wait 124372
00:05:15.516 03:56:29 --
rpc/skip_rpc.sh@47 -- # local spdk_pid=124628 00:05:15.516 03:56:29 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:15.516 03:56:29 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:20.810 03:56:34 -- rpc/skip_rpc.sh@50 -- # killprocess 124628 00:05:20.810 03:56:34 -- common/autotest_common.sh@936 -- # '[' -z 124628 ']' 00:05:20.810 03:56:34 -- common/autotest_common.sh@940 -- # kill -0 124628 00:05:20.810 03:56:34 -- common/autotest_common.sh@941 -- # uname 00:05:20.810 03:56:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:20.810 03:56:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124628 00:05:20.810 03:56:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:20.810 03:56:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:20.810 03:56:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 124628' 00:05:20.810 killing process with pid 124628 00:05:20.810 03:56:34 -- common/autotest_common.sh@955 -- # kill 124628 00:05:20.810 03:56:34 -- common/autotest_common.sh@960 -- # wait 124628 00:05:20.810 03:56:35 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:20.810 03:56:35 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:20.810 00:05:20.810 real 0m6.728s 00:05:20.810 user 0m6.500s 00:05:20.810 sys 0m0.616s 00:05:20.810 03:56:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:20.810 03:56:35 -- common/autotest_common.sh@10 -- # set +x 00:05:20.810 ************************************ 00:05:20.810 END TEST skip_rpc_with_json 00:05:20.810 ************************************ 00:05:20.810 03:56:35 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:20.810 03:56:35 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.810 03:56:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.810 03:56:35 -- common/autotest_common.sh@10 -- # set +x 00:05:21.070 ************************************ 00:05:21.070 START TEST skip_rpc_with_delay 00:05:21.070 ************************************ 00:05:21.070 03:56:35 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:05:21.070 03:56:35 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:21.070 03:56:35 -- common/autotest_common.sh@638 -- # local es=0 00:05:21.070 03:56:35 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:21.070 03:56:35 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.070 03:56:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:21.070 03:56:35 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.070 03:56:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:21.070 03:56:35 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.070 03:56:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:21.070 03:56:35 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.070 03:56:35 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:21.070 03:56:35 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:21.070 [2024-04-19 03:56:35.499542] app.c: 751:spdk_app_start: *ERROR*: 
Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:21.070 [2024-04-19 03:56:35.499609] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:21.070 03:56:35 -- common/autotest_common.sh@641 -- # es=1 00:05:21.070 03:56:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:21.070 03:56:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:21.070 03:56:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:21.070 00:05:21.070 real 0m0.062s 00:05:21.070 user 0m0.039s 00:05:21.070 sys 0m0.023s 00:05:21.070 03:56:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:21.070 03:56:35 -- common/autotest_common.sh@10 -- # set +x 00:05:21.070 ************************************ 00:05:21.070 END TEST skip_rpc_with_delay 00:05:21.070 ************************************ 00:05:21.070 03:56:35 -- rpc/skip_rpc.sh@77 -- # uname 00:05:21.070 03:56:35 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:21.070 03:56:35 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:21.071 03:56:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.071 03:56:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.071 03:56:35 -- common/autotest_common.sh@10 -- # set +x 00:05:21.330 ************************************ 00:05:21.330 START TEST exit_on_failed_rpc_init 00:05:21.330 ************************************ 00:05:21.330 03:56:35 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:05:21.330 03:56:35 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=125745 00:05:21.330 03:56:35 -- rpc/skip_rpc.sh@63 -- # waitforlisten 125745 00:05:21.330 03:56:35 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.330 03:56:35 -- common/autotest_common.sh@817 -- # '[' -z 125745 ']' 00:05:21.330 03:56:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 
00:05:21.330 03:56:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:21.330 03:56:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.330 03:56:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:21.330 03:56:35 -- common/autotest_common.sh@10 -- # set +x 00:05:21.330 [2024-04-19 03:56:35.728047] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:05:21.330 [2024-04-19 03:56:35.728093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125745 ] 00:05:21.330 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.330 [2024-04-19 03:56:35.797287] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.590 [2024-04-19 03:56:35.871284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.160 03:56:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:22.160 03:56:36 -- common/autotest_common.sh@850 -- # return 0 00:05:22.160 03:56:36 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.160 03:56:36 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:22.160 03:56:36 -- common/autotest_common.sh@638 -- # local es=0 00:05:22.160 03:56:36 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:22.160 03:56:36 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.160 03:56:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:22.160 03:56:36 -- 
common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.160 03:56:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:22.160 03:56:36 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.160 03:56:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:22.160 03:56:36 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.160 03:56:36 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:22.160 03:56:36 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:22.160 [2024-04-19 03:56:36.551144] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:05:22.160 [2024-04-19 03:56:36.551183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126010 ] 00:05:22.160 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.160 [2024-04-19 03:56:36.616671] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.160 [2024-04-19 03:56:36.684001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.160 [2024-04-19 03:56:36.684063] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:22.160 [2024-04-19 03:56:36.684072] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:22.160 [2024-04-19 03:56:36.684077] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:22.420 03:56:36 -- common/autotest_common.sh@641 -- # es=234 00:05:22.420 03:56:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:22.420 03:56:36 -- common/autotest_common.sh@650 -- # es=106 00:05:22.420 03:56:36 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:22.420 03:56:36 -- common/autotest_common.sh@658 -- # es=1 00:05:22.420 03:56:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:22.420 03:56:36 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:22.420 03:56:36 -- rpc/skip_rpc.sh@70 -- # killprocess 125745 00:05:22.420 03:56:36 -- common/autotest_common.sh@936 -- # '[' -z 125745 ']' 00:05:22.420 03:56:36 -- common/autotest_common.sh@940 -- # kill -0 125745 00:05:22.420 03:56:36 -- common/autotest_common.sh@941 -- # uname 00:05:22.420 03:56:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:22.420 03:56:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 125745 00:05:22.420 03:56:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:22.420 03:56:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:22.420 03:56:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 125745' 00:05:22.420 killing process with pid 125745 00:05:22.420 03:56:36 -- common/autotest_common.sh@955 -- # kill 125745 00:05:22.420 03:56:36 -- common/autotest_common.sh@960 -- # wait 125745 00:05:22.680 00:05:22.680 real 0m1.456s 00:05:22.680 user 0m1.653s 00:05:22.680 sys 0m0.412s 00:05:22.680 03:56:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:22.680 03:56:37 -- common/autotest_common.sh@10 -- # set +x 00:05:22.680 ************************************ 00:05:22.680 END TEST exit_on_failed_rpc_init 00:05:22.680 
************************************ 00:05:22.680 03:56:37 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:22.680 00:05:22.680 real 0m14.385s 00:05:22.680 user 0m13.609s 00:05:22.680 sys 0m1.760s 00:05:22.680 03:56:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:22.680 03:56:37 -- common/autotest_common.sh@10 -- # set +x 00:05:22.680 ************************************ 00:05:22.680 END TEST skip_rpc 00:05:22.680 ************************************ 00:05:22.680 03:56:37 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:22.680 03:56:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.941 03:56:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.941 03:56:37 -- common/autotest_common.sh@10 -- # set +x 00:05:22.941 ************************************ 00:05:22.941 START TEST rpc_client 00:05:22.941 ************************************ 00:05:22.941 03:56:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:22.941 * Looking for test storage... 
00:05:22.941 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:22.941 03:56:37 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:22.941 OK 00:05:22.941 03:56:37 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:23.202 00:05:23.202 real 0m0.118s 00:05:23.202 user 0m0.051s 00:05:23.202 sys 0m0.074s 00:05:23.202 03:56:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:23.202 03:56:37 -- common/autotest_common.sh@10 -- # set +x 00:05:23.202 ************************************ 00:05:23.202 END TEST rpc_client 00:05:23.202 ************************************ 00:05:23.202 03:56:37 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:23.202 03:56:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.202 03:56:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.202 03:56:37 -- common/autotest_common.sh@10 -- # set +x 00:05:23.202 ************************************ 00:05:23.202 START TEST json_config 00:05:23.202 ************************************ 00:05:23.202 03:56:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:23.202 03:56:37 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:23.202 03:56:37 -- nvmf/common.sh@7 -- # uname -s 00:05:23.202 03:56:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:23.202 03:56:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:23.202 03:56:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:23.202 03:56:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:23.202 03:56:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:23.202 03:56:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:23.202 03:56:37 -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:23.202 03:56:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:23.202 03:56:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:23.202 03:56:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:23.202 03:56:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:05:23.202 03:56:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:05:23.202 03:56:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:23.202 03:56:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:23.202 03:56:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:23.202 03:56:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:23.202 03:56:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:23.202 03:56:37 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:23.202 03:56:37 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:23.202 03:56:37 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:23.202 03:56:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.202 03:56:37 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.202 03:56:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.202 03:56:37 -- paths/export.sh@5 -- # export PATH 00:05:23.202 03:56:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.202 03:56:37 -- nvmf/common.sh@47 -- # : 0 00:05:23.202 03:56:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:23.202 03:56:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:23.202 03:56:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:23.202 03:56:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:23.202 03:56:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:23.202 03:56:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:23.202 03:56:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:23.202 03:56:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:23.202 
03:56:37 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:23.202 03:56:37 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:23.202 03:56:37 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:23.202 03:56:37 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:23.202 03:56:37 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:23.202 03:56:37 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:23.202 03:56:37 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:23.202 03:56:37 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:23.202 03:56:37 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:23.202 03:56:37 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:23.202 03:56:37 -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:23.202 03:56:37 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:23.462 03:56:37 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:23.462 03:56:37 -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:23.462 03:56:37 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:23.462 03:56:37 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:23.462 INFO: JSON configuration test init 00:05:23.462 03:56:37 -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:23.462 03:56:37 -- json_config/json_config.sh@262 -- # timing_enter 
json_config_test_init 00:05:23.462 03:56:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:23.462 03:56:37 -- common/autotest_common.sh@10 -- # set +x 00:05:23.462 03:56:37 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:23.462 03:56:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:23.462 03:56:37 -- common/autotest_common.sh@10 -- # set +x 00:05:23.462 03:56:37 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:23.462 03:56:37 -- json_config/common.sh@9 -- # local app=target 00:05:23.462 03:56:37 -- json_config/common.sh@10 -- # shift 00:05:23.462 03:56:37 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:23.462 03:56:37 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:23.462 03:56:37 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:23.462 03:56:37 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.462 03:56:37 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.462 03:56:37 -- json_config/common.sh@22 -- # app_pid["$app"]=126394 00:05:23.462 03:56:37 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:23.462 Waiting for target to run... 00:05:23.462 03:56:37 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:23.462 03:56:37 -- json_config/common.sh@25 -- # waitforlisten 126394 /var/tmp/spdk_tgt.sock 00:05:23.462 03:56:37 -- common/autotest_common.sh@817 -- # '[' -z 126394 ']' 00:05:23.462 03:56:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:23.462 03:56:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:23.462 03:56:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:05:23.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:23.462 03:56:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:23.462 03:56:37 -- common/autotest_common.sh@10 -- # set +x 00:05:23.462 [2024-04-19 03:56:37.789079] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:05:23.463 [2024-04-19 03:56:37.789122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126394 ] 00:05:23.463 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.722 [2024-04-19 03:56:38.208890] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.982 [2024-04-19 03:56:38.297295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.241 03:56:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:24.241 03:56:38 -- common/autotest_common.sh@850 -- # return 0 00:05:24.241 03:56:38 -- json_config/common.sh@26 -- # echo '' 00:05:24.241 00:05:24.241 03:56:38 -- json_config/json_config.sh@269 -- # create_accel_config 00:05:24.242 03:56:38 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:24.242 03:56:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:24.242 03:56:38 -- common/autotest_common.sh@10 -- # set +x 00:05:24.242 03:56:38 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:24.242 03:56:38 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:24.242 03:56:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:24.242 03:56:38 -- common/autotest_common.sh@10 -- # set +x 00:05:24.242 03:56:38 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:24.242 03:56:38 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 
00:05:24.242 03:56:38 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:27.538 03:56:41 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:27.538 03:56:41 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:27.538 03:56:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:27.538 03:56:41 -- common/autotest_common.sh@10 -- # set +x 00:05:27.538 03:56:41 -- json_config/json_config.sh@45 -- # local ret=0 00:05:27.538 03:56:41 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:27.538 03:56:41 -- json_config/json_config.sh@46 -- # local enabled_types 00:05:27.538 03:56:41 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:27.538 03:56:41 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:27.538 03:56:41 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:27.538 03:56:41 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:27.538 03:56:41 -- json_config/json_config.sh@48 -- # local get_types 00:05:27.538 03:56:41 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:27.538 03:56:41 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:27.538 03:56:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:27.538 03:56:41 -- common/autotest_common.sh@10 -- # set +x 00:05:27.538 03:56:41 -- json_config/json_config.sh@55 -- # return 0 00:05:27.538 03:56:41 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:27.538 03:56:41 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:27.538 03:56:41 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:27.538 03:56:41 -- json_config/json_config.sh@290 -- 
# [[ 1 -eq 1 ]] 00:05:27.538 03:56:41 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:27.538 03:56:41 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:27.538 03:56:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:27.538 03:56:41 -- common/autotest_common.sh@10 -- # set +x 00:05:27.538 03:56:41 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:27.538 03:56:41 -- json_config/json_config.sh@233 -- # [[ rdma == \r\d\m\a ]] 00:05:27.538 03:56:41 -- json_config/json_config.sh@234 -- # TEST_TRANSPORT=rdma 00:05:27.538 03:56:41 -- json_config/json_config.sh@234 -- # nvmftestinit 00:05:27.538 03:56:41 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:05:27.538 03:56:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:27.538 03:56:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:05:27.538 03:56:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:05:27.538 03:56:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:05:27.538 03:56:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:27.538 03:56:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:27.538 03:56:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:27.538 03:56:41 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:05:27.538 03:56:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:05:27.538 03:56:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:05:27.538 03:56:41 -- common/autotest_common.sh@10 -- # set +x 00:05:32.816 03:56:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:05:32.816 03:56:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:05:32.816 03:56:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:32.817 03:56:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:32.817 03:56:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:32.817 03:56:47 -- nvmf/common.sh@293 -- # pci_drivers=() 
00:05:32.817 03:56:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:32.817 03:56:47 -- nvmf/common.sh@295 -- # net_devs=() 00:05:32.817 03:56:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:32.817 03:56:47 -- nvmf/common.sh@296 -- # e810=() 00:05:32.817 03:56:47 -- nvmf/common.sh@296 -- # local -ga e810 00:05:32.817 03:56:47 -- nvmf/common.sh@297 -- # x722=() 00:05:32.817 03:56:47 -- nvmf/common.sh@297 -- # local -ga x722 00:05:32.817 03:56:47 -- nvmf/common.sh@298 -- # mlx=() 00:05:32.817 03:56:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:05:32.817 03:56:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:32.817 03:56:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:32.817 03:56:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:32.817 03:56:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:32.817 03:56:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:32.817 03:56:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:32.817 03:56:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:32.817 03:56:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:32.817 03:56:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:32.817 03:56:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:32.817 03:56:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:32.817 03:56:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:32.817 03:56:47 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:05:32.817 03:56:47 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:05:32.817 03:56:47 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:05:32.817 03:56:47 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:05:32.817 03:56:47 -- nvmf/common.sh@328 -- # 
pci_devs=("${mlx[@]}") 00:05:32.817 03:56:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:32.817 03:56:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:32.817 03:56:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:05:32.817 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:05:32.817 03:56:47 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:32.817 03:56:47 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:32.817 03:56:47 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:32.817 03:56:47 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:32.817 03:56:47 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:05:32.817 03:56:47 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:32.817 03:56:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:32.817 03:56:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:05:32.817 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:05:32.817 03:56:47 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:32.817 03:56:47 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:32.817 03:56:47 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:32.817 03:56:47 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:32.817 03:56:47 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:05:32.817 03:56:47 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:32.817 03:56:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:32.817 03:56:47 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:05:32.817 03:56:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:32.817 03:56:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:32.817 03:56:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:32.817 03:56:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:32.817 03:56:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 
00:05:32.817 Found net devices under 0000:18:00.0: mlx_0_0 00:05:32.817 03:56:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:32.817 03:56:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:32.817 03:56:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:32.817 03:56:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:32.817 03:56:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:32.817 03:56:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:05:32.817 Found net devices under 0000:18:00.1: mlx_0_1 00:05:32.817 03:56:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:32.817 03:56:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:05:32.817 03:56:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:05:32.817 03:56:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:05:32.817 03:56:47 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:05:32.817 03:56:47 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:05:32.817 03:56:47 -- nvmf/common.sh@409 -- # rdma_device_init 00:05:32.817 03:56:47 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:05:32.817 03:56:47 -- nvmf/common.sh@58 -- # uname 00:05:32.817 03:56:47 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:05:32.817 03:56:47 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:05:32.817 03:56:47 -- nvmf/common.sh@63 -- # modprobe ib_core 00:05:32.817 03:56:47 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:05:32.817 03:56:47 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:05:32.817 03:56:47 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:05:32.817 03:56:47 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:05:33.077 03:56:47 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:05:33.077 03:56:47 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:05:33.077 03:56:47 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:33.077 03:56:47 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:05:33.077 03:56:47 -- 
nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:33.077 03:56:47 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:33.077 03:56:47 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:33.077 03:56:47 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:33.077 03:56:47 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:33.077 03:56:47 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:33.077 03:56:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:33.077 03:56:47 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:33.077 03:56:47 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:33.077 03:56:47 -- nvmf/common.sh@105 -- # continue 2 00:05:33.077 03:56:47 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:33.077 03:56:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:33.077 03:56:47 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:33.077 03:56:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:33.077 03:56:47 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:33.077 03:56:47 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:33.077 03:56:47 -- nvmf/common.sh@105 -- # continue 2 00:05:33.077 03:56:47 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:33.077 03:56:47 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:05:33.077 03:56:47 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:33.077 03:56:47 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:33.077 03:56:47 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:33.077 03:56:47 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:33.077 03:56:47 -- nvmf/common.sh@74 -- # ip= 00:05:33.077 03:56:47 -- nvmf/common.sh@75 -- # [[ -z '' ]] 00:05:33.077 03:56:47 -- nvmf/common.sh@76 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:05:33.077 03:56:47 -- nvmf/common.sh@77 -- # ip link set mlx_0_0 up 
00:05:33.077 03:56:47 -- nvmf/common.sh@78 -- # (( count = count + 1 )) 00:05:33.077 03:56:47 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:05:33.077 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:33.077 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:05:33.077 altname enp24s0f0np0 00:05:33.077 altname ens785f0np0 00:05:33.077 inet 192.168.100.8/24 scope global mlx_0_0 00:05:33.077 valid_lft forever preferred_lft forever 00:05:33.077 03:56:47 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:33.077 03:56:47 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:05:33.077 03:56:47 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:33.077 03:56:47 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:33.077 03:56:47 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:33.077 03:56:47 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:33.077 03:56:47 -- nvmf/common.sh@74 -- # ip= 00:05:33.077 03:56:47 -- nvmf/common.sh@75 -- # [[ -z '' ]] 00:05:33.077 03:56:47 -- nvmf/common.sh@76 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:05:33.077 03:56:47 -- nvmf/common.sh@77 -- # ip link set mlx_0_1 up 00:05:33.077 03:56:47 -- nvmf/common.sh@78 -- # (( count = count + 1 )) 00:05:33.077 03:56:47 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:05:33.077 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:33.077 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:05:33.077 altname enp24s0f1np1 00:05:33.077 altname ens785f1np1 00:05:33.077 inet 192.168.100.9/24 scope global mlx_0_1 00:05:33.077 valid_lft forever preferred_lft forever 00:05:33.077 03:56:47 -- nvmf/common.sh@411 -- # return 0 00:05:33.077 03:56:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:05:33.077 03:56:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:33.077 03:56:47 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:05:33.077 03:56:47 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:05:33.077 03:56:47 -- 
nvmf/common.sh@86 -- # get_rdma_if_list 00:05:33.077 03:56:47 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:33.077 03:56:47 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:33.077 03:56:47 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:33.077 03:56:47 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:33.077 03:56:47 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:33.077 03:56:47 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:33.077 03:56:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:33.077 03:56:47 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:33.077 03:56:47 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:33.077 03:56:47 -- nvmf/common.sh@105 -- # continue 2 00:05:33.077 03:56:47 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:33.077 03:56:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:33.077 03:56:47 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:33.078 03:56:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:33.078 03:56:47 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:33.078 03:56:47 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:33.078 03:56:47 -- nvmf/common.sh@105 -- # continue 2 00:05:33.078 03:56:47 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:33.078 03:56:47 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:05:33.078 03:56:47 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:33.078 03:56:47 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:33.078 03:56:47 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:33.078 03:56:47 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:33.078 03:56:47 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:33.078 03:56:47 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:05:33.078 03:56:47 -- nvmf/common.sh@112 -- # 
interface=mlx_0_1 00:05:33.078 03:56:47 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:33.078 03:56:47 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:33.078 03:56:47 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:33.078 03:56:47 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:05:33.078 192.168.100.9' 00:05:33.078 03:56:47 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:05:33.078 192.168.100.9' 00:05:33.078 03:56:47 -- nvmf/common.sh@446 -- # head -n 1 00:05:33.078 03:56:47 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:33.078 03:56:47 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:05:33.078 192.168.100.9' 00:05:33.078 03:56:47 -- nvmf/common.sh@447 -- # tail -n +2 00:05:33.078 03:56:47 -- nvmf/common.sh@447 -- # head -n 1 00:05:33.078 03:56:47 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:33.078 03:56:47 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:05:33.078 03:56:47 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:33.078 03:56:47 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:05:33.078 03:56:47 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:05:33.078 03:56:47 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:05:33.078 03:56:47 -- json_config/json_config.sh@237 -- # [[ -z 192.168.100.8 ]] 00:05:33.078 03:56:47 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:33.078 03:56:47 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:33.337 MallocForNvmf0 00:05:33.337 03:56:47 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:33.337 03:56:47 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:33.337 
MallocForNvmf1 00:05:33.596 03:56:47 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:33.596 03:56:47 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:33.596 [2024-04-19 03:56:48.010051] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:33.596 [2024-04-19 03:56:48.052600] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18647f0/0x19b1700) succeed. 00:05:33.596 [2024-04-19 03:56:48.062849] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18669e0/0x1911640) succeed. 00:05:33.596 03:56:48 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:33.596 03:56:48 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:33.855 03:56:48 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:33.855 03:56:48 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:34.114 03:56:48 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:34.114 03:56:48 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:34.114 03:56:48 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:34.114 03:56:48 -- json_config/common.sh@57 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:34.373 [2024-04-19 03:56:48.690322] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:34.373 03:56:48 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:34.373 03:56:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:34.373 03:56:48 -- common/autotest_common.sh@10 -- # set +x 00:05:34.373 03:56:48 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:34.373 03:56:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:34.373 03:56:48 -- common/autotest_common.sh@10 -- # set +x 00:05:34.373 03:56:48 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:34.373 03:56:48 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:34.373 03:56:48 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:34.633 MallocBdevForConfigChangeCheck 00:05:34.633 03:56:48 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:34.633 03:56:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:34.633 03:56:48 -- common/autotest_common.sh@10 -- # set +x 00:05:34.633 03:56:48 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:34.633 03:56:48 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:34.892 03:56:49 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:34.892 INFO: shutting down applications... 
00:05:34.892 03:56:49 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:34.892 03:56:49 -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:34.892 03:56:49 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:34.892 03:56:49 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:39.088 Calling clear_iscsi_subsystem 00:05:39.088 Calling clear_nvmf_subsystem 00:05:39.088 Calling clear_nbd_subsystem 00:05:39.088 Calling clear_ublk_subsystem 00:05:39.088 Calling clear_vhost_blk_subsystem 00:05:39.088 Calling clear_vhost_scsi_subsystem 00:05:39.088 Calling clear_bdev_subsystem 00:05:39.088 03:56:53 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:05:39.088 03:56:53 -- json_config/json_config.sh@343 -- # count=100 00:05:39.088 03:56:53 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:39.088 03:56:53 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.089 03:56:53 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:39.089 03:56:53 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:39.089 03:56:53 -- json_config/json_config.sh@345 -- # break 00:05:39.089 03:56:53 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:39.089 03:56:53 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:39.089 03:56:53 -- json_config/common.sh@31 -- # local app=target 00:05:39.089 03:56:53 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:39.089 03:56:53 -- json_config/common.sh@35 -- # [[ -n 126394 ]] 00:05:39.089 03:56:53 
-- json_config/common.sh@38 -- # kill -SIGINT 126394 00:05:39.089 03:56:53 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:39.089 03:56:53 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.089 03:56:53 -- json_config/common.sh@41 -- # kill -0 126394 00:05:39.089 03:56:53 -- json_config/common.sh@45 -- # sleep 0.5 00:05:39.659 03:56:53 -- json_config/common.sh@40 -- # (( i++ )) 00:05:39.659 03:56:53 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.659 03:56:53 -- json_config/common.sh@41 -- # kill -0 126394 00:05:39.659 03:56:53 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:39.659 03:56:53 -- json_config/common.sh@43 -- # break 00:05:39.659 03:56:53 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:39.659 03:56:53 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:39.659 SPDK target shutdown done 00:05:39.659 03:56:53 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:39.659 INFO: relaunching applications... 00:05:39.659 03:56:53 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.659 03:56:53 -- json_config/common.sh@9 -- # local app=target 00:05:39.659 03:56:53 -- json_config/common.sh@10 -- # shift 00:05:39.659 03:56:53 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:39.659 03:56:53 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:39.659 03:56:53 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:39.659 03:56:53 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.659 03:56:53 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.659 03:56:53 -- json_config/common.sh@22 -- # app_pid["$app"]=131562 00:05:39.659 03:56:53 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:39.659 Waiting for target to run... 
00:05:39.659 03:56:53 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.659 03:56:53 -- json_config/common.sh@25 -- # waitforlisten 131562 /var/tmp/spdk_tgt.sock 00:05:39.659 03:56:53 -- common/autotest_common.sh@817 -- # '[' -z 131562 ']' 00:05:39.659 03:56:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:39.659 03:56:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:39.659 03:56:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:39.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:39.659 03:56:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:39.659 03:56:53 -- common/autotest_common.sh@10 -- # set +x 00:05:39.659 [2024-04-19 03:56:54.004224] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:05:39.659 [2024-04-19 03:56:54.004289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131562 ] 00:05:39.659 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.919 [2024-04-19 03:56:54.422747] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.179 [2024-04-19 03:56:54.504913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.470 [2024-04-19 03:56:57.523776] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x151d780/0x1549a80) succeed. 00:05:43.470 [2024-04-19 03:56:57.532927] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x151f970/0x15a9a80) succeed. 
00:05:43.470 [2024-04-19 03:56:57.580164] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:43.730 03:56:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:43.730 03:56:58 -- common/autotest_common.sh@850 -- # return 0 00:05:43.730 03:56:58 -- json_config/common.sh@26 -- # echo '' 00:05:43.730 00:05:43.730 03:56:58 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:43.730 03:56:58 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:43.730 INFO: Checking if target configuration is the same... 00:05:43.730 03:56:58 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.730 03:56:58 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:43.730 03:56:58 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:43.730 + '[' 2 -ne 2 ']' 00:05:43.730 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:43.730 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 
00:05:43.730 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:43.730 +++ basename /dev/fd/62 00:05:43.730 ++ mktemp /tmp/62.XXX 00:05:43.730 + tmp_file_1=/tmp/62.1C4 00:05:43.730 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.730 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:43.730 + tmp_file_2=/tmp/spdk_tgt_config.json.JT8 00:05:43.730 + ret=0 00:05:43.730 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:43.989 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:43.989 + diff -u /tmp/62.1C4 /tmp/spdk_tgt_config.json.JT8 00:05:43.989 + echo 'INFO: JSON config files are the same' 00:05:43.989 INFO: JSON config files are the same 00:05:43.989 + rm /tmp/62.1C4 /tmp/spdk_tgt_config.json.JT8 00:05:43.989 + exit 0 00:05:43.989 03:56:58 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:43.989 03:56:58 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:43.989 INFO: changing configuration and checking if this can be detected... 
00:05:43.989 03:56:58 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:43.989 03:56:58 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:44.248 03:56:58 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:44.248 03:56:58 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:44.248 03:56:58 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:44.248 + '[' 2 -ne 2 ']' 00:05:44.248 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:44.248 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:44.248 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:44.248 +++ basename /dev/fd/62 00:05:44.248 ++ mktemp /tmp/62.XXX 00:05:44.248 + tmp_file_1=/tmp/62.ewA 00:05:44.248 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:44.249 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:44.249 + tmp_file_2=/tmp/spdk_tgt_config.json.380 00:05:44.249 + ret=0 00:05:44.249 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:44.508 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:44.508 + diff -u /tmp/62.ewA /tmp/spdk_tgt_config.json.380 00:05:44.508 + ret=1 00:05:44.508 + echo '=== Start of file: /tmp/62.ewA ===' 00:05:44.508 + cat /tmp/62.ewA 00:05:44.508 + echo '=== End of file: /tmp/62.ewA ===' 00:05:44.508 + echo '' 00:05:44.508 + echo '=== Start of file: /tmp/spdk_tgt_config.json.380 ===' 00:05:44.508 + cat /tmp/spdk_tgt_config.json.380 00:05:44.508 + 
echo '=== End of file: /tmp/spdk_tgt_config.json.380 ===' 00:05:44.508 + echo '' 00:05:44.508 + rm /tmp/62.ewA /tmp/spdk_tgt_config.json.380 00:05:44.508 + exit 1 00:05:44.508 03:56:58 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:44.508 INFO: configuration change detected. 00:05:44.508 03:56:58 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:44.508 03:56:58 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:44.508 03:56:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:44.508 03:56:58 -- common/autotest_common.sh@10 -- # set +x 00:05:44.508 03:56:58 -- json_config/json_config.sh@307 -- # local ret=0 00:05:44.508 03:56:58 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:44.508 03:56:58 -- json_config/json_config.sh@317 -- # [[ -n 131562 ]] 00:05:44.508 03:56:58 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:44.508 03:56:58 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:44.508 03:56:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:44.508 03:56:58 -- common/autotest_common.sh@10 -- # set +x 00:05:44.508 03:56:58 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:44.508 03:56:58 -- json_config/json_config.sh@193 -- # uname -s 00:05:44.508 03:56:58 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:44.508 03:56:58 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:44.508 03:56:58 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:44.508 03:56:58 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:44.508 03:56:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:44.508 03:56:58 -- common/autotest_common.sh@10 -- # set +x 00:05:44.508 03:56:59 -- json_config/json_config.sh@323 -- # killprocess 131562 00:05:44.508 03:56:59 -- common/autotest_common.sh@936 -- # '[' -z 131562 ']' 00:05:44.508 
03:56:59 -- common/autotest_common.sh@940 -- # kill -0 131562 00:05:44.508 03:56:59 -- common/autotest_common.sh@941 -- # uname 00:05:44.508 03:56:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:44.508 03:56:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 131562 00:05:44.768 03:56:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:44.768 03:56:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:44.768 03:56:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 131562' 00:05:44.768 killing process with pid 131562 00:05:44.768 03:56:59 -- common/autotest_common.sh@955 -- # kill 131562 00:05:44.768 03:56:59 -- common/autotest_common.sh@960 -- # wait 131562 00:05:48.976 03:57:02 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:48.976 03:57:02 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:48.976 03:57:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:48.976 03:57:02 -- common/autotest_common.sh@10 -- # set +x 00:05:48.976 03:57:02 -- json_config/json_config.sh@328 -- # return 0 00:05:48.976 03:57:02 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:48.976 INFO: Success 00:05:48.976 03:57:02 -- json_config/json_config.sh@1 -- # nvmftestfini 00:05:48.976 03:57:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:05:48.976 03:57:02 -- nvmf/common.sh@117 -- # sync 00:05:48.976 03:57:02 -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:05:48.976 03:57:02 -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:05:48.976 03:57:02 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:05:48.976 03:57:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:05:48.976 03:57:02 -- nvmf/common.sh@484 -- # [[ '' == \t\c\p ]] 00:05:48.976 00:05:48.976 real 0m25.312s 00:05:48.976 user 0m27.328s 00:05:48.976 sys 0m6.234s 
00:05:48.976 03:57:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:48.976 03:57:02 -- common/autotest_common.sh@10 -- # set +x 00:05:48.976 ************************************ 00:05:48.976 END TEST json_config 00:05:48.976 ************************************ 00:05:48.976 03:57:02 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:48.976 03:57:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:48.976 03:57:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.976 03:57:02 -- common/autotest_common.sh@10 -- # set +x 00:05:48.976 ************************************ 00:05:48.976 START TEST json_config_extra_key 00:05:48.976 ************************************ 00:05:48.976 03:57:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:48.976 03:57:03 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:48.976 03:57:03 -- nvmf/common.sh@7 -- # uname -s 00:05:48.976 03:57:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:48.976 03:57:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:48.976 03:57:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:48.976 03:57:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:48.976 03:57:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:48.976 03:57:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:48.976 03:57:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:48.976 03:57:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:48.976 03:57:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:48.976 03:57:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:48.976 03:57:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 
00:05:48.976 03:57:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:05:48.976 03:57:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:48.976 03:57:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:48.976 03:57:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:48.976 03:57:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:48.976 03:57:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:48.976 03:57:03 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:48.976 03:57:03 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:48.976 03:57:03 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:48.976 03:57:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.976 03:57:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.976 03:57:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.976 03:57:03 -- paths/export.sh@5 -- # export PATH 00:05:48.976 03:57:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.976 03:57:03 -- nvmf/common.sh@47 -- # : 0 00:05:48.976 03:57:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:48.976 03:57:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:48.976 03:57:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:48.976 03:57:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:48.976 03:57:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:48.976 03:57:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:48.976 03:57:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:48.976 03:57:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:48.976 03:57:03 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:48.976 03:57:03 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:48.976 03:57:03 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:48.976 03:57:03 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 
00:05:48.976 03:57:03 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:48.976 03:57:03 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:48.976 03:57:03 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:48.976 03:57:03 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:48.976 03:57:03 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:48.976 03:57:03 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:48.976 03:57:03 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:48.976 INFO: launching applications... 00:05:48.976 03:57:03 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:48.976 03:57:03 -- json_config/common.sh@9 -- # local app=target 00:05:48.976 03:57:03 -- json_config/common.sh@10 -- # shift 00:05:48.976 03:57:03 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:48.976 03:57:03 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:48.976 03:57:03 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:48.976 03:57:03 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:48.976 03:57:03 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:48.976 03:57:03 -- json_config/common.sh@22 -- # app_pid["$app"]=133563 00:05:48.976 03:57:03 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:48.976 Waiting for target to run... 
00:05:48.976 03:57:03 -- json_config/common.sh@25 -- # waitforlisten 133563 /var/tmp/spdk_tgt.sock 00:05:48.977 03:57:03 -- common/autotest_common.sh@817 -- # '[' -z 133563 ']' 00:05:48.977 03:57:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:48.977 03:57:03 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:48.977 03:57:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:48.977 03:57:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:48.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:48.977 03:57:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:48.977 03:57:03 -- common/autotest_common.sh@10 -- # set +x 00:05:48.977 [2024-04-19 03:57:03.260913] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:05:48.977 [2024-04-19 03:57:03.260963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133563 ] 00:05:48.977 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.236 [2024-04-19 03:57:03.686296] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.496 [2024-04-19 03:57:03.768689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.755 03:57:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:49.755 03:57:04 -- common/autotest_common.sh@850 -- # return 0 00:05:49.755 03:57:04 -- json_config/common.sh@26 -- # echo '' 00:05:49.755 00:05:49.755 03:57:04 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:49.755 INFO: shutting down applications... 00:05:49.755 03:57:04 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:49.755 03:57:04 -- json_config/common.sh@31 -- # local app=target 00:05:49.755 03:57:04 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:49.755 03:57:04 -- json_config/common.sh@35 -- # [[ -n 133563 ]] 00:05:49.755 03:57:04 -- json_config/common.sh@38 -- # kill -SIGINT 133563 00:05:49.755 03:57:04 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:49.755 03:57:04 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:49.755 03:57:04 -- json_config/common.sh@41 -- # kill -0 133563 00:05:49.755 03:57:04 -- json_config/common.sh@45 -- # sleep 0.5 00:05:50.015 03:57:04 -- json_config/common.sh@40 -- # (( i++ )) 00:05:50.015 03:57:04 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.015 03:57:04 -- json_config/common.sh@41 -- # kill -0 133563 00:05:50.015 03:57:04 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:50.015 03:57:04 -- json_config/common.sh@43 -- # break 00:05:50.015 03:57:04 -- json_config/common.sh@48 -- # [[ -n '' ]] 
00:05:50.015 03:57:04 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:50.015 SPDK target shutdown done 00:05:50.015 03:57:04 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:50.015 Success 00:05:50.015 00:05:50.015 real 0m1.426s 00:05:50.015 user 0m0.892s 00:05:50.015 sys 0m0.522s 00:05:50.015 03:57:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:50.015 03:57:04 -- common/autotest_common.sh@10 -- # set +x 00:05:50.015 ************************************ 00:05:50.015 END TEST json_config_extra_key 00:05:50.016 ************************************ 00:05:50.276 03:57:04 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:50.276 03:57:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.276 03:57:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.276 03:57:04 -- common/autotest_common.sh@10 -- # set +x 00:05:50.276 ************************************ 00:05:50.276 START TEST alias_rpc 00:05:50.276 ************************************ 00:05:50.276 03:57:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:50.276 * Looking for test storage... 
00:05:50.276 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:05:50.276 03:57:04 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:50.276 03:57:04 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=133965 00:05:50.276 03:57:04 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 133965 00:05:50.276 03:57:04 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.276 03:57:04 -- common/autotest_common.sh@817 -- # '[' -z 133965 ']' 00:05:50.276 03:57:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.276 03:57:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:50.276 03:57:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.276 03:57:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:50.276 03:57:04 -- common/autotest_common.sh@10 -- # set +x 00:05:50.536 [2024-04-19 03:57:04.845375] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:05:50.536 [2024-04-19 03:57:04.845434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133965 ] 00:05:50.536 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.536 [2024-04-19 03:57:04.895328] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.536 [2024-04-19 03:57:04.966798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.106 03:57:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:51.106 03:57:05 -- common/autotest_common.sh@850 -- # return 0 00:05:51.106 03:57:05 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:51.366 03:57:05 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 133965 00:05:51.366 03:57:05 -- common/autotest_common.sh@936 -- # '[' -z 133965 ']' 00:05:51.366 03:57:05 -- common/autotest_common.sh@940 -- # kill -0 133965 00:05:51.366 03:57:05 -- common/autotest_common.sh@941 -- # uname 00:05:51.366 03:57:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:51.366 03:57:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 133965 00:05:51.366 03:57:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:51.366 03:57:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:51.366 03:57:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 133965' 00:05:51.366 killing process with pid 133965 00:05:51.366 03:57:05 -- common/autotest_common.sh@955 -- # kill 133965 00:05:51.366 03:57:05 -- common/autotest_common.sh@960 -- # wait 133965 00:05:51.936 00:05:51.936 real 0m1.472s 00:05:51.936 user 0m1.579s 00:05:51.936 sys 0m0.392s 00:05:51.936 03:57:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:51.936 03:57:06 -- common/autotest_common.sh@10 -- # set +x 00:05:51.936 
************************************ 00:05:51.936 END TEST alias_rpc 00:05:51.936 ************************************ 00:05:51.936 03:57:06 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:51.936 03:57:06 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:51.936 03:57:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:51.936 03:57:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.936 03:57:06 -- common/autotest_common.sh@10 -- # set +x 00:05:51.936 ************************************ 00:05:51.936 START TEST spdkcli_tcp 00:05:51.936 ************************************ 00:05:51.936 03:57:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:51.936 * Looking for test storage... 00:05:51.936 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:05:51.936 03:57:06 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:05:51.936 03:57:06 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:51.936 03:57:06 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:05:51.936 03:57:06 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:51.936 03:57:06 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:51.936 03:57:06 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:51.936 03:57:06 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:51.936 03:57:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:51.936 03:57:06 -- common/autotest_common.sh@10 -- # set +x 00:05:51.936 03:57:06 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=134689 00:05:51.936 03:57:06 -- spdkcli/tcp.sh@27 -- # waitforlisten 134689 00:05:51.936 03:57:06 -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:51.936 03:57:06 -- common/autotest_common.sh@817 -- # '[' -z 134689 ']' 00:05:51.936 03:57:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.936 03:57:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:51.936 03:57:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.936 03:57:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:51.936 03:57:06 -- common/autotest_common.sh@10 -- # set +x 00:05:52.197 [2024-04-19 03:57:06.489428] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:05:52.197 [2024-04-19 03:57:06.489495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134689 ] 00:05:52.197 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.197 [2024-04-19 03:57:06.553523] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.197 [2024-04-19 03:57:06.622848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.197 [2024-04-19 03:57:06.622848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.765 03:57:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:52.765 03:57:07 -- common/autotest_common.sh@850 -- # return 0 00:05:52.765 03:57:07 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:52.765 03:57:07 -- spdkcli/tcp.sh@31 -- # socat_pid=134814 00:05:52.765 03:57:07 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:53.025 [ 00:05:53.025 
"bdev_malloc_delete", 00:05:53.025 "bdev_malloc_create", 00:05:53.025 "bdev_null_resize", 00:05:53.025 "bdev_null_delete", 00:05:53.025 "bdev_null_create", 00:05:53.025 "bdev_nvme_cuse_unregister", 00:05:53.025 "bdev_nvme_cuse_register", 00:05:53.025 "bdev_opal_new_user", 00:05:53.025 "bdev_opal_set_lock_state", 00:05:53.025 "bdev_opal_delete", 00:05:53.025 "bdev_opal_get_info", 00:05:53.025 "bdev_opal_create", 00:05:53.025 "bdev_nvme_opal_revert", 00:05:53.025 "bdev_nvme_opal_init", 00:05:53.025 "bdev_nvme_send_cmd", 00:05:53.025 "bdev_nvme_get_path_iostat", 00:05:53.025 "bdev_nvme_get_mdns_discovery_info", 00:05:53.025 "bdev_nvme_stop_mdns_discovery", 00:05:53.025 "bdev_nvme_start_mdns_discovery", 00:05:53.025 "bdev_nvme_set_multipath_policy", 00:05:53.025 "bdev_nvme_set_preferred_path", 00:05:53.025 "bdev_nvme_get_io_paths", 00:05:53.025 "bdev_nvme_remove_error_injection", 00:05:53.025 "bdev_nvme_add_error_injection", 00:05:53.025 "bdev_nvme_get_discovery_info", 00:05:53.025 "bdev_nvme_stop_discovery", 00:05:53.025 "bdev_nvme_start_discovery", 00:05:53.025 "bdev_nvme_get_controller_health_info", 00:05:53.025 "bdev_nvme_disable_controller", 00:05:53.025 "bdev_nvme_enable_controller", 00:05:53.025 "bdev_nvme_reset_controller", 00:05:53.025 "bdev_nvme_get_transport_statistics", 00:05:53.025 "bdev_nvme_apply_firmware", 00:05:53.025 "bdev_nvme_detach_controller", 00:05:53.025 "bdev_nvme_get_controllers", 00:05:53.025 "bdev_nvme_attach_controller", 00:05:53.025 "bdev_nvme_set_hotplug", 00:05:53.026 "bdev_nvme_set_options", 00:05:53.026 "bdev_passthru_delete", 00:05:53.026 "bdev_passthru_create", 00:05:53.026 "bdev_lvol_grow_lvstore", 00:05:53.026 "bdev_lvol_get_lvols", 00:05:53.026 "bdev_lvol_get_lvstores", 00:05:53.026 "bdev_lvol_delete", 00:05:53.026 "bdev_lvol_set_read_only", 00:05:53.026 "bdev_lvol_resize", 00:05:53.026 "bdev_lvol_decouple_parent", 00:05:53.026 "bdev_lvol_inflate", 00:05:53.026 "bdev_lvol_rename", 00:05:53.026 "bdev_lvol_clone_bdev", 00:05:53.026 
"bdev_lvol_clone", 00:05:53.026 "bdev_lvol_snapshot", 00:05:53.026 "bdev_lvol_create", 00:05:53.026 "bdev_lvol_delete_lvstore", 00:05:53.026 "bdev_lvol_rename_lvstore", 00:05:53.026 "bdev_lvol_create_lvstore", 00:05:53.026 "bdev_raid_set_options", 00:05:53.026 "bdev_raid_remove_base_bdev", 00:05:53.026 "bdev_raid_add_base_bdev", 00:05:53.026 "bdev_raid_delete", 00:05:53.026 "bdev_raid_create", 00:05:53.026 "bdev_raid_get_bdevs", 00:05:53.026 "bdev_error_inject_error", 00:05:53.026 "bdev_error_delete", 00:05:53.026 "bdev_error_create", 00:05:53.026 "bdev_split_delete", 00:05:53.026 "bdev_split_create", 00:05:53.026 "bdev_delay_delete", 00:05:53.026 "bdev_delay_create", 00:05:53.026 "bdev_delay_update_latency", 00:05:53.026 "bdev_zone_block_delete", 00:05:53.026 "bdev_zone_block_create", 00:05:53.026 "blobfs_create", 00:05:53.026 "blobfs_detect", 00:05:53.026 "blobfs_set_cache_size", 00:05:53.026 "bdev_aio_delete", 00:05:53.026 "bdev_aio_rescan", 00:05:53.026 "bdev_aio_create", 00:05:53.026 "bdev_ftl_set_property", 00:05:53.026 "bdev_ftl_get_properties", 00:05:53.026 "bdev_ftl_get_stats", 00:05:53.026 "bdev_ftl_unmap", 00:05:53.026 "bdev_ftl_unload", 00:05:53.026 "bdev_ftl_delete", 00:05:53.026 "bdev_ftl_load", 00:05:53.026 "bdev_ftl_create", 00:05:53.026 "bdev_virtio_attach_controller", 00:05:53.026 "bdev_virtio_scsi_get_devices", 00:05:53.026 "bdev_virtio_detach_controller", 00:05:53.026 "bdev_virtio_blk_set_hotplug", 00:05:53.026 "bdev_iscsi_delete", 00:05:53.026 "bdev_iscsi_create", 00:05:53.026 "bdev_iscsi_set_options", 00:05:53.026 "accel_error_inject_error", 00:05:53.026 "ioat_scan_accel_module", 00:05:53.026 "dsa_scan_accel_module", 00:05:53.026 "iaa_scan_accel_module", 00:05:53.026 "keyring_file_remove_key", 00:05:53.026 "keyring_file_add_key", 00:05:53.026 "iscsi_set_options", 00:05:53.026 "iscsi_get_auth_groups", 00:05:53.026 "iscsi_auth_group_remove_secret", 00:05:53.026 "iscsi_auth_group_add_secret", 00:05:53.026 "iscsi_delete_auth_group", 00:05:53.026 
"iscsi_create_auth_group", 00:05:53.026 "iscsi_set_discovery_auth", 00:05:53.026 "iscsi_get_options", 00:05:53.026 "iscsi_target_node_request_logout", 00:05:53.026 "iscsi_target_node_set_redirect", 00:05:53.026 "iscsi_target_node_set_auth", 00:05:53.026 "iscsi_target_node_add_lun", 00:05:53.026 "iscsi_get_stats", 00:05:53.026 "iscsi_get_connections", 00:05:53.026 "iscsi_portal_group_set_auth", 00:05:53.026 "iscsi_start_portal_group", 00:05:53.026 "iscsi_delete_portal_group", 00:05:53.026 "iscsi_create_portal_group", 00:05:53.026 "iscsi_get_portal_groups", 00:05:53.026 "iscsi_delete_target_node", 00:05:53.026 "iscsi_target_node_remove_pg_ig_maps", 00:05:53.026 "iscsi_target_node_add_pg_ig_maps", 00:05:53.026 "iscsi_create_target_node", 00:05:53.026 "iscsi_get_target_nodes", 00:05:53.026 "iscsi_delete_initiator_group", 00:05:53.026 "iscsi_initiator_group_remove_initiators", 00:05:53.026 "iscsi_initiator_group_add_initiators", 00:05:53.026 "iscsi_create_initiator_group", 00:05:53.026 "iscsi_get_initiator_groups", 00:05:53.026 "nvmf_set_crdt", 00:05:53.026 "nvmf_set_config", 00:05:53.026 "nvmf_set_max_subsystems", 00:05:53.026 "nvmf_subsystem_get_listeners", 00:05:53.026 "nvmf_subsystem_get_qpairs", 00:05:53.026 "nvmf_subsystem_get_controllers", 00:05:53.026 "nvmf_get_stats", 00:05:53.026 "nvmf_get_transports", 00:05:53.026 "nvmf_create_transport", 00:05:53.026 "nvmf_get_targets", 00:05:53.026 "nvmf_delete_target", 00:05:53.026 "nvmf_create_target", 00:05:53.026 "nvmf_subsystem_allow_any_host", 00:05:53.026 "nvmf_subsystem_remove_host", 00:05:53.026 "nvmf_subsystem_add_host", 00:05:53.026 "nvmf_ns_remove_host", 00:05:53.026 "nvmf_ns_add_host", 00:05:53.026 "nvmf_subsystem_remove_ns", 00:05:53.026 "nvmf_subsystem_add_ns", 00:05:53.026 "nvmf_subsystem_listener_set_ana_state", 00:05:53.026 "nvmf_discovery_get_referrals", 00:05:53.026 "nvmf_discovery_remove_referral", 00:05:53.026 "nvmf_discovery_add_referral", 00:05:53.026 "nvmf_subsystem_remove_listener", 00:05:53.026 
"nvmf_subsystem_add_listener", 00:05:53.026 "nvmf_delete_subsystem", 00:05:53.026 "nvmf_create_subsystem", 00:05:53.026 "nvmf_get_subsystems", 00:05:53.026 "env_dpdk_get_mem_stats", 00:05:53.026 "nbd_get_disks", 00:05:53.026 "nbd_stop_disk", 00:05:53.026 "nbd_start_disk", 00:05:53.026 "ublk_recover_disk", 00:05:53.026 "ublk_get_disks", 00:05:53.026 "ublk_stop_disk", 00:05:53.026 "ublk_start_disk", 00:05:53.026 "ublk_destroy_target", 00:05:53.026 "ublk_create_target", 00:05:53.026 "virtio_blk_create_transport", 00:05:53.026 "virtio_blk_get_transports", 00:05:53.026 "vhost_controller_set_coalescing", 00:05:53.026 "vhost_get_controllers", 00:05:53.026 "vhost_delete_controller", 00:05:53.026 "vhost_create_blk_controller", 00:05:53.026 "vhost_scsi_controller_remove_target", 00:05:53.026 "vhost_scsi_controller_add_target", 00:05:53.026 "vhost_start_scsi_controller", 00:05:53.026 "vhost_create_scsi_controller", 00:05:53.026 "thread_set_cpumask", 00:05:53.026 "framework_get_scheduler", 00:05:53.026 "framework_set_scheduler", 00:05:53.026 "framework_get_reactors", 00:05:53.026 "thread_get_io_channels", 00:05:53.026 "thread_get_pollers", 00:05:53.026 "thread_get_stats", 00:05:53.026 "framework_monitor_context_switch", 00:05:53.026 "spdk_kill_instance", 00:05:53.026 "log_enable_timestamps", 00:05:53.026 "log_get_flags", 00:05:53.026 "log_clear_flag", 00:05:53.026 "log_set_flag", 00:05:53.026 "log_get_level", 00:05:53.026 "log_set_level", 00:05:53.026 "log_get_print_level", 00:05:53.026 "log_set_print_level", 00:05:53.026 "framework_enable_cpumask_locks", 00:05:53.026 "framework_disable_cpumask_locks", 00:05:53.026 "framework_wait_init", 00:05:53.026 "framework_start_init", 00:05:53.026 "scsi_get_devices", 00:05:53.026 "bdev_get_histogram", 00:05:53.026 "bdev_enable_histogram", 00:05:53.026 "bdev_set_qos_limit", 00:05:53.026 "bdev_set_qd_sampling_period", 00:05:53.026 "bdev_get_bdevs", 00:05:53.026 "bdev_reset_iostat", 00:05:53.026 "bdev_get_iostat", 00:05:53.026 
"bdev_examine", 00:05:53.026 "bdev_wait_for_examine", 00:05:53.026 "bdev_set_options", 00:05:53.026 "notify_get_notifications", 00:05:53.026 "notify_get_types", 00:05:53.026 "accel_get_stats", 00:05:53.026 "accel_set_options", 00:05:53.026 "accel_set_driver", 00:05:53.026 "accel_crypto_key_destroy", 00:05:53.026 "accel_crypto_keys_get", 00:05:53.026 "accel_crypto_key_create", 00:05:53.026 "accel_assign_opc", 00:05:53.026 "accel_get_module_info", 00:05:53.026 "accel_get_opc_assignments", 00:05:53.026 "vmd_rescan", 00:05:53.026 "vmd_remove_device", 00:05:53.026 "vmd_enable", 00:05:53.026 "sock_set_default_impl", 00:05:53.026 "sock_impl_set_options", 00:05:53.026 "sock_impl_get_options", 00:05:53.026 "iobuf_get_stats", 00:05:53.026 "iobuf_set_options", 00:05:53.026 "framework_get_pci_devices", 00:05:53.026 "framework_get_config", 00:05:53.026 "framework_get_subsystems", 00:05:53.026 "trace_get_info", 00:05:53.026 "trace_get_tpoint_group_mask", 00:05:53.026 "trace_disable_tpoint_group", 00:05:53.026 "trace_enable_tpoint_group", 00:05:53.026 "trace_clear_tpoint_mask", 00:05:53.026 "trace_set_tpoint_mask", 00:05:53.026 "keyring_get_keys", 00:05:53.026 "spdk_get_version", 00:05:53.026 "rpc_get_methods" 00:05:53.026 ] 00:05:53.026 03:57:07 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:53.026 03:57:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:53.026 03:57:07 -- common/autotest_common.sh@10 -- # set +x 00:05:53.026 03:57:07 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:53.026 03:57:07 -- spdkcli/tcp.sh@38 -- # killprocess 134689 00:05:53.026 03:57:07 -- common/autotest_common.sh@936 -- # '[' -z 134689 ']' 00:05:53.026 03:57:07 -- common/autotest_common.sh@940 -- # kill -0 134689 00:05:53.026 03:57:07 -- common/autotest_common.sh@941 -- # uname 00:05:53.026 03:57:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:53.026 03:57:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 134689 00:05:53.026 
03:57:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:53.026 03:57:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:53.026 03:57:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 134689' 00:05:53.026 killing process with pid 134689 00:05:53.026 03:57:07 -- common/autotest_common.sh@955 -- # kill 134689 00:05:53.026 03:57:07 -- common/autotest_common.sh@960 -- # wait 134689 00:05:53.596 00:05:53.596 real 0m1.490s 00:05:53.596 user 0m2.704s 00:05:53.597 sys 0m0.452s 00:05:53.597 03:57:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:53.597 03:57:07 -- common/autotest_common.sh@10 -- # set +x 00:05:53.597 ************************************ 00:05:53.597 END TEST spdkcli_tcp 00:05:53.597 ************************************ 00:05:53.597 03:57:07 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:53.597 03:57:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:53.597 03:57:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.597 03:57:07 -- common/autotest_common.sh@10 -- # set +x 00:05:53.597 ************************************ 00:05:53.597 START TEST dpdk_mem_utility 00:05:53.597 ************************************ 00:05:53.597 03:57:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:53.597 * Looking for test storage... 
00:05:53.597 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:05:53.597 03:57:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:53.597 03:57:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=135135 00:05:53.597 03:57:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 135135 00:05:53.597 03:57:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:53.597 03:57:08 -- common/autotest_common.sh@817 -- # '[' -z 135135 ']' 00:05:53.597 03:57:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.597 03:57:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:53.597 03:57:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.597 03:57:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:53.597 03:57:08 -- common/autotest_common.sh@10 -- # set +x 00:05:53.597 [2024-04-19 03:57:08.120665] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:05:53.597 [2024-04-19 03:57:08.120717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135135 ] 00:05:53.857 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.857 [2024-04-19 03:57:08.186478] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.857 [2024-04-19 03:57:08.253124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.427 03:57:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:54.427 03:57:08 -- common/autotest_common.sh@850 -- # return 0 00:05:54.427 03:57:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:54.427 03:57:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:54.427 03:57:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:54.428 03:57:08 -- common/autotest_common.sh@10 -- # set +x 00:05:54.428 { 00:05:54.428 "filename": "/tmp/spdk_mem_dump.txt" 00:05:54.428 } 00:05:54.428 03:57:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:54.428 03:57:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:54.689 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:54.689 1 heaps totaling size 814.000000 MiB 00:05:54.689 size: 814.000000 MiB heap id: 0 00:05:54.689 end heaps---------- 00:05:54.689 8 mempools totaling size 598.116089 MiB 00:05:54.689 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:54.689 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:54.689 size: 84.521057 MiB name: bdev_io_135135 00:05:54.689 size: 51.011292 MiB name: evtpool_135135 00:05:54.689 size: 50.003479 MiB name: msgpool_135135 00:05:54.689 size: 21.763794 MiB name: PDU_Pool 00:05:54.689 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:54.689 
size: 0.026123 MiB name: Session_Pool 00:05:54.689 end mempools------- 00:05:54.689 6 memzones totaling size 4.142822 MiB 00:05:54.689 size: 1.000366 MiB name: RG_ring_0_135135 00:05:54.689 size: 1.000366 MiB name: RG_ring_1_135135 00:05:54.689 size: 1.000366 MiB name: RG_ring_4_135135 00:05:54.689 size: 1.000366 MiB name: RG_ring_5_135135 00:05:54.689 size: 0.125366 MiB name: RG_ring_2_135135 00:05:54.689 size: 0.015991 MiB name: RG_ring_3_135135 00:05:54.689 end memzones------- 00:05:54.689 03:57:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:54.689 heap id: 0 total size: 814.000000 MiB number of busy elements: 42 number of free elements: 15 00:05:54.689 list of free elements. size: 12.517212 MiB 00:05:54.689 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:54.689 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:54.689 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:54.689 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:54.689 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:54.689 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:54.689 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:54.689 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:54.689 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:54.689 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:54.689 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:54.689 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:54.689 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:54.689 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:54.689 element at address: 0x200003a00000 with size: 0.353394 MiB 00:05:54.689 list of standard malloc elements. 
size: 199.220215 MiB 00:05:54.689 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:54.689 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:54.689 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:54.689 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:54.689 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:54.689 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:54.689 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:54.689 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:54.689 element at address: 0x200003aff280 with size: 0.002136 MiB 00:05:54.689 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:54.689 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:54.689 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:54.689 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:54.689 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:54.689 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:54.689 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:54.689 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:54.689 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:54.689 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:54.689 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:54.689 element at address: 0x200003a5a780 with size: 0.000183 MiB 00:05:54.689 element at address: 0x200003adaa40 with size: 0.000183 MiB 00:05:54.689 element at address: 0x200003adac40 with size: 0.000183 MiB 00:05:54.689 element at address: 0x200003adef00 with size: 0.000183 MiB 00:05:54.689 element at address: 0x200003aff1c0 with size: 0.000183 MiB 00:05:54.689 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:54.689 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:54.689 element at 
address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:54.689 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:54.689 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:54.689 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:54.689 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:54.689 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:54.689 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:54.689 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:54.689 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:54.689 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:54.689 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:54.689 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:54.689 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:54.689 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:54.689 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:54.689 list of memzone associated elements. 
size: 602.262573 MiB 00:05:54.689 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:54.689 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:54.689 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:54.689 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:54.689 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:54.689 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_135135_0 00:05:54.689 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:54.689 associated memzone info: size: 48.002930 MiB name: MP_evtpool_135135_0 00:05:54.689 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:54.689 associated memzone info: size: 48.002930 MiB name: MP_msgpool_135135_0 00:05:54.689 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:54.689 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:54.689 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:54.689 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:54.689 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:54.689 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_135135 00:05:54.689 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:54.689 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_135135 00:05:54.689 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:54.689 associated memzone info: size: 1.007996 MiB name: MP_evtpool_135135 00:05:54.689 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:54.689 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:54.689 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:54.689 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:54.689 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:54.689 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:54.689 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:54.689 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:54.689 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:54.689 associated memzone info: size: 1.000366 MiB name: RG_ring_0_135135 00:05:54.689 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:54.689 associated memzone info: size: 1.000366 MiB name: RG_ring_1_135135 00:05:54.689 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:54.689 associated memzone info: size: 1.000366 MiB name: RG_ring_4_135135 00:05:54.689 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:54.689 associated memzone info: size: 1.000366 MiB name: RG_ring_5_135135 00:05:54.689 element at address: 0x200003a5a840 with size: 0.500488 MiB 00:05:54.689 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_135135 00:05:54.689 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:54.689 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:54.689 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:54.689 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:54.689 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:54.689 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:54.689 element at address: 0x200003adefc0 with size: 0.125488 MiB 00:05:54.689 associated memzone info: size: 0.125366 MiB name: RG_ring_2_135135 00:05:54.689 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:54.689 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:54.689 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:54.689 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:54.689 element at address: 0x200003adad00 with size: 0.016113 MiB 00:05:54.689 
associated memzone info: size: 0.015991 MiB name: RG_ring_3_135135 00:05:54.689 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:54.689 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:54.689 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:54.689 associated memzone info: size: 0.000183 MiB name: MP_msgpool_135135 00:05:54.689 element at address: 0x200003adab00 with size: 0.000305 MiB 00:05:54.689 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_135135 00:05:54.690 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:54.690 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:54.690 03:57:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:54.690 03:57:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 135135 00:05:54.690 03:57:08 -- common/autotest_common.sh@936 -- # '[' -z 135135 ']' 00:05:54.690 03:57:08 -- common/autotest_common.sh@940 -- # kill -0 135135 00:05:54.690 03:57:08 -- common/autotest_common.sh@941 -- # uname 00:05:54.690 03:57:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:54.690 03:57:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 135135 00:05:54.690 03:57:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:54.690 03:57:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:54.690 03:57:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 135135' 00:05:54.690 killing process with pid 135135 00:05:54.690 03:57:09 -- common/autotest_common.sh@955 -- # kill 135135 00:05:54.690 03:57:09 -- common/autotest_common.sh@960 -- # wait 135135 00:05:54.950 00:05:54.950 real 0m1.378s 00:05:54.950 user 0m1.428s 00:05:54.950 sys 0m0.385s 00:05:54.950 03:57:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:54.950 03:57:09 -- common/autotest_common.sh@10 -- # set +x 00:05:54.950 
************************************ 00:05:54.950 END TEST dpdk_mem_utility 00:05:54.950 ************************************ 00:05:54.950 03:57:09 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:54.950 03:57:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.950 03:57:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.950 03:57:09 -- common/autotest_common.sh@10 -- # set +x 00:05:55.209 ************************************ 00:05:55.209 START TEST event 00:05:55.209 ************************************ 00:05:55.209 03:57:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:55.209 * Looking for test storage... 00:05:55.209 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:55.209 03:57:09 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:55.209 03:57:09 -- bdev/nbd_common.sh@6 -- # set -e 00:05:55.209 03:57:09 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:55.209 03:57:09 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:55.209 03:57:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.209 03:57:09 -- common/autotest_common.sh@10 -- # set +x 00:05:55.469 ************************************ 00:05:55.469 START TEST event_perf 00:05:55.469 ************************************ 00:05:55.469 03:57:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:55.469 Running I/O for 1 seconds...[2024-04-19 03:57:09.774842] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:05:55.469 [2024-04-19 03:57:09.774906] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135470 ] 00:05:55.469 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.469 [2024-04-19 03:57:09.844717] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:55.469 [2024-04-19 03:57:09.914938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.469 [2024-04-19 03:57:09.915046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.469 [2024-04-19 03:57:09.915150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.469 Running I/O for 1 seconds...[2024-04-19 03:57:09.915151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.860 00:05:56.860 lcore 0: 210035 00:05:56.861 lcore 1: 210033 00:05:56.861 lcore 2: 210035 00:05:56.861 lcore 3: 210034 00:05:56.861 done. 
00:05:56.861 00:05:56.861 real 0m1.248s 00:05:56.861 user 0m4.162s 00:05:56.861 sys 0m0.082s 00:05:56.861 03:57:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:56.861 03:57:11 -- common/autotest_common.sh@10 -- # set +x 00:05:56.861 ************************************ 00:05:56.861 END TEST event_perf 00:05:56.861 ************************************ 00:05:56.861 03:57:11 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:56.861 03:57:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:56.861 03:57:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.861 03:57:11 -- common/autotest_common.sh@10 -- # set +x 00:05:56.861 ************************************ 00:05:56.861 START TEST event_reactor 00:05:56.861 ************************************ 00:05:56.861 03:57:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:56.861 [2024-04-19 03:57:11.197483] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:05:56.861 [2024-04-19 03:57:11.197550] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135765 ] 00:05:56.861 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.861 [2024-04-19 03:57:11.270968] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.861 [2024-04-19 03:57:11.347312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.245 test_start 00:05:58.245 oneshot 00:05:58.245 tick 100 00:05:58.245 tick 100 00:05:58.245 tick 250 00:05:58.245 tick 100 00:05:58.245 tick 100 00:05:58.245 tick 100 00:05:58.245 tick 250 00:05:58.245 tick 500 00:05:58.245 tick 100 00:05:58.245 tick 100 00:05:58.245 tick 250 00:05:58.245 tick 100 00:05:58.245 tick 100 00:05:58.245 test_end 00:05:58.245 00:05:58.245 real 0m1.255s 00:05:58.245 user 0m1.154s 00:05:58.245 sys 0m0.097s 00:05:58.245 03:57:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:58.245 03:57:12 -- common/autotest_common.sh@10 -- # set +x 00:05:58.245 ************************************ 00:05:58.245 END TEST event_reactor 00:05:58.245 ************************************ 00:05:58.245 03:57:12 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:58.245 03:57:12 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:58.245 03:57:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.245 03:57:12 -- common/autotest_common.sh@10 -- # set +x 00:05:58.245 ************************************ 00:05:58.245 START TEST event_reactor_perf 00:05:58.245 ************************************ 00:05:58.245 03:57:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:58.245 [2024-04-19 03:57:12.626059] Starting 
SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:05:58.245 [2024-04-19 03:57:12.626127] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136052 ] 00:05:58.245 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.245 [2024-04-19 03:57:12.699808] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.504 [2024-04-19 03:57:12.775696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.452 test_start 00:05:59.452 test_end 00:05:59.452 Performance: 548766 events per second 00:05:59.452 00:05:59.452 real 0m1.254s 00:05:59.452 user 0m1.155s 00:05:59.452 sys 0m0.094s 00:05:59.452 03:57:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:59.452 03:57:13 -- common/autotest_common.sh@10 -- # set +x 00:05:59.452 ************************************ 00:05:59.452 END TEST event_reactor_perf 00:05:59.452 ************************************ 00:05:59.452 03:57:13 -- event/event.sh@49 -- # uname -s 00:05:59.452 03:57:13 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:59.452 03:57:13 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:59.452 03:57:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.452 03:57:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.452 03:57:13 -- common/autotest_common.sh@10 -- # set +x 00:05:59.712 ************************************ 00:05:59.712 START TEST event_scheduler 00:05:59.712 ************************************ 00:05:59.712 03:57:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:59.712 * Looking for test storage... 
00:05:59.712 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:05:59.712 03:57:14 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:59.712 03:57:14 -- scheduler/scheduler.sh@35 -- # scheduler_pid=136369 00:05:59.712 03:57:14 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.712 03:57:14 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:59.712 03:57:14 -- scheduler/scheduler.sh@37 -- # waitforlisten 136369 00:05:59.712 03:57:14 -- common/autotest_common.sh@817 -- # '[' -z 136369 ']' 00:05:59.712 03:57:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.712 03:57:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:59.712 03:57:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.712 03:57:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:59.712 03:57:14 -- common/autotest_common.sh@10 -- # set +x 00:05:59.712 [2024-04-19 03:57:14.170157] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:05:59.712 [2024-04-19 03:57:14.170207] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136369 ] 00:05:59.712 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.712 [2024-04-19 03:57:14.238352] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:59.970 [2024-04-19 03:57:14.306693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.970 [2024-04-19 03:57:14.306805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.970 [2024-04-19 03:57:14.306908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.970 [2024-04-19 03:57:14.306909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.539 03:57:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:00.539 03:57:14 -- common/autotest_common.sh@850 -- # return 0 00:06:00.539 03:57:14 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:00.539 03:57:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:00.539 03:57:14 -- common/autotest_common.sh@10 -- # set +x 00:06:00.539 POWER: Env isn't set yet! 00:06:00.539 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:00.539 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:00.539 POWER: Cannot set governor of lcore 0 to userspace 00:06:00.539 POWER: Attempting to initialise PSTAT power management... 
00:06:00.539 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:00.539 POWER: Initialized successfully for lcore 0 power management 00:06:00.539 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:00.539 POWER: Initialized successfully for lcore 1 power management 00:06:00.539 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:00.539 POWER: Initialized successfully for lcore 2 power management 00:06:00.539 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:00.539 POWER: Initialized successfully for lcore 3 power management 00:06:00.539 03:57:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:00.539 03:57:14 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:00.539 03:57:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:00.539 03:57:14 -- common/autotest_common.sh@10 -- # set +x 00:06:00.539 [2024-04-19 03:57:15.054067] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:00.539 03:57:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:00.539 03:57:15 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:00.539 03:57:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:00.539 03:57:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.539 03:57:15 -- common/autotest_common.sh@10 -- # set +x 00:06:00.798 ************************************ 00:06:00.798 START TEST scheduler_create_thread 00:06:00.798 ************************************ 00:06:00.798 03:57:15 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:06:00.798 03:57:15 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:00.798 03:57:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:00.798 03:57:15 -- common/autotest_common.sh@10 -- # set +x 00:06:00.798 2 00:06:00.798 03:57:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:00.798 03:57:15 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:00.798 03:57:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:00.798 03:57:15 -- common/autotest_common.sh@10 -- # set +x 00:06:00.798 3 00:06:00.798 03:57:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:00.798 03:57:15 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:00.798 03:57:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:00.798 03:57:15 -- common/autotest_common.sh@10 -- # set +x 00:06:00.798 4 00:06:00.798 03:57:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:00.798 03:57:15 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:00.798 03:57:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:00.798 
03:57:15 -- common/autotest_common.sh@10 -- # set +x 00:06:00.798 5 00:06:00.798 03:57:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:00.798 03:57:15 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:00.798 03:57:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:00.798 03:57:15 -- common/autotest_common.sh@10 -- # set +x 00:06:00.798 6 00:06:00.798 03:57:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:00.798 03:57:15 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:00.798 03:57:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:00.798 03:57:15 -- common/autotest_common.sh@10 -- # set +x 00:06:00.798 7 00:06:00.798 03:57:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:00.798 03:57:15 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:00.798 03:57:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:00.798 03:57:15 -- common/autotest_common.sh@10 -- # set +x 00:06:00.798 8 00:06:00.798 03:57:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:00.798 03:57:15 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:00.798 03:57:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:00.798 03:57:15 -- common/autotest_common.sh@10 -- # set +x 00:06:00.798 9 00:06:00.798 03:57:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:00.798 03:57:15 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:00.798 03:57:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:00.798 03:57:15 -- common/autotest_common.sh@10 -- # set +x 00:06:00.798 10 00:06:00.798 03:57:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:06:00.798 03:57:15 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:00.798 03:57:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:00.798 03:57:15 -- common/autotest_common.sh@10 -- # set +x 00:06:00.798 03:57:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:00.798 03:57:15 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:00.798 03:57:15 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:00.798 03:57:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:00.798 03:57:15 -- common/autotest_common.sh@10 -- # set +x 00:06:01.736 03:57:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.736 03:57:16 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:01.736 03:57:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.736 03:57:16 -- common/autotest_common.sh@10 -- # set +x 00:06:03.113 03:57:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:03.113 03:57:17 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:03.113 03:57:17 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:03.113 03:57:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:03.113 03:57:17 -- common/autotest_common.sh@10 -- # set +x 00:06:04.050 03:57:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:04.050 00:06:04.050 real 0m3.379s 00:06:04.050 user 0m0.023s 00:06:04.050 sys 0m0.004s 00:06:04.050 03:57:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:04.050 03:57:18 -- common/autotest_common.sh@10 -- # set +x 00:06:04.050 ************************************ 00:06:04.050 END TEST scheduler_create_thread 00:06:04.050 ************************************ 00:06:04.308 03:57:18 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:04.308 03:57:18 -- 
scheduler/scheduler.sh@46 -- # killprocess 136369 00:06:04.308 03:57:18 -- common/autotest_common.sh@936 -- # '[' -z 136369 ']' 00:06:04.308 03:57:18 -- common/autotest_common.sh@940 -- # kill -0 136369 00:06:04.308 03:57:18 -- common/autotest_common.sh@941 -- # uname 00:06:04.308 03:57:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:04.308 03:57:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 136369 00:06:04.308 03:57:18 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:04.308 03:57:18 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:04.308 03:57:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 136369' 00:06:04.308 killing process with pid 136369 00:06:04.308 03:57:18 -- common/autotest_common.sh@955 -- # kill 136369 00:06:04.308 03:57:18 -- common/autotest_common.sh@960 -- # wait 136369 00:06:04.568 [2024-04-19 03:57:18.926041] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:04.568 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:04.568 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:04.568 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:04.568 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:04.568 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:04.568 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:04.568 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:04.568 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:04.828 00:06:04.828 real 0m5.138s 00:06:04.828 user 0m10.500s 00:06:04.828 sys 0m0.419s 00:06:04.828 03:57:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:04.828 03:57:19 -- common/autotest_common.sh@10 -- # set +x 00:06:04.828 ************************************ 00:06:04.828 END TEST event_scheduler 00:06:04.828 ************************************ 00:06:04.828 03:57:19 -- event/event.sh@51 -- # modprobe -n nbd 00:06:04.828 03:57:19 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:04.828 03:57:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.828 03:57:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.828 03:57:19 -- common/autotest_common.sh@10 -- # set +x 00:06:04.828 ************************************ 00:06:04.828 START TEST app_repeat 00:06:04.828 ************************************ 00:06:04.828 03:57:19 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:06:04.828 03:57:19 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.828 03:57:19 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.828 
03:57:19 -- event/event.sh@13 -- # local nbd_list 00:06:04.828 03:57:19 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.828 03:57:19 -- event/event.sh@14 -- # local bdev_list 00:06:04.828 03:57:19 -- event/event.sh@15 -- # local repeat_times=4 00:06:04.828 03:57:19 -- event/event.sh@17 -- # modprobe nbd 00:06:04.828 03:57:19 -- event/event.sh@19 -- # repeat_pid=137483 00:06:04.828 03:57:19 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.828 03:57:19 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:04.828 03:57:19 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 137483' 00:06:04.828 Process app_repeat pid: 137483 00:06:04.828 03:57:19 -- event/event.sh@23 -- # for i in {0..2} 00:06:04.828 03:57:19 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:04.828 spdk_app_start Round 0 00:06:04.828 03:57:19 -- event/event.sh@25 -- # waitforlisten 137483 /var/tmp/spdk-nbd.sock 00:06:04.828 03:57:19 -- common/autotest_common.sh@817 -- # '[' -z 137483 ']' 00:06:04.828 03:57:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.828 03:57:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:04.828 03:57:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:04.828 03:57:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:04.828 03:57:19 -- common/autotest_common.sh@10 -- # set +x 00:06:04.828 [2024-04-19 03:57:19.349939] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:06:04.828 [2024-04-19 03:57:19.349984] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137483 ] 00:06:05.087 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.088 [2024-04-19 03:57:19.399422] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.088 [2024-04-19 03:57:19.466739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.088 [2024-04-19 03:57:19.466742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.088 03:57:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:05.088 03:57:19 -- common/autotest_common.sh@850 -- # return 0 00:06:05.088 03:57:19 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.347 Malloc0 00:06:05.347 03:57:19 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.347 Malloc1 00:06:05.607 03:57:19 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.607 03:57:19 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.607 03:57:19 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.607 03:57:19 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:05.607 03:57:19 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.607 03:57:19 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:05.607 03:57:19 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.607 03:57:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.607 03:57:19 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 
00:06:05.607 03:57:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:05.607 03:57:19 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.607 03:57:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:05.607 03:57:19 -- bdev/nbd_common.sh@12 -- # local i 00:06:05.607 03:57:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:05.607 03:57:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.607 03:57:19 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.607 /dev/nbd0 00:06:05.607 03:57:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.607 03:57:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.607 03:57:20 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:05.607 03:57:20 -- common/autotest_common.sh@855 -- # local i 00:06:05.607 03:57:20 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:05.607 03:57:20 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:05.607 03:57:20 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:05.607 03:57:20 -- common/autotest_common.sh@859 -- # break 00:06:05.607 03:57:20 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:05.607 03:57:20 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:05.607 03:57:20 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.607 1+0 records in 00:06:05.607 1+0 records out 00:06:05.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186413 s, 22.0 MB/s 00:06:05.607 03:57:20 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:05.607 03:57:20 -- common/autotest_common.sh@872 -- # size=4096 00:06:05.607 03:57:20 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:05.607 
03:57:20 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:05.607 03:57:20 -- common/autotest_common.sh@875 -- # return 0 00:06:05.607 03:57:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.607 03:57:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.607 03:57:20 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.867 /dev/nbd1 00:06:05.867 03:57:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.867 03:57:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.867 03:57:20 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:05.867 03:57:20 -- common/autotest_common.sh@855 -- # local i 00:06:05.867 03:57:20 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:05.867 03:57:20 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:05.867 03:57:20 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:05.867 03:57:20 -- common/autotest_common.sh@859 -- # break 00:06:05.867 03:57:20 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:05.867 03:57:20 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:05.867 03:57:20 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.867 1+0 records in 00:06:05.867 1+0 records out 00:06:05.867 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198855 s, 20.6 MB/s 00:06:05.867 03:57:20 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:05.867 03:57:20 -- common/autotest_common.sh@872 -- # size=4096 00:06:05.867 03:57:20 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:05.868 03:57:20 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:05.868 03:57:20 -- common/autotest_common.sh@875 -- # return 0 00:06:05.868 
03:57:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.868 03:57:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.868 03:57:20 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.868 03:57:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.868 03:57:20 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:06.128 { 00:06:06.128 "nbd_device": "/dev/nbd0", 00:06:06.128 "bdev_name": "Malloc0" 00:06:06.128 }, 00:06:06.128 { 00:06:06.128 "nbd_device": "/dev/nbd1", 00:06:06.128 "bdev_name": "Malloc1" 00:06:06.128 } 00:06:06.128 ]' 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:06.128 { 00:06:06.128 "nbd_device": "/dev/nbd0", 00:06:06.128 "bdev_name": "Malloc0" 00:06:06.128 }, 00:06:06.128 { 00:06:06.128 "nbd_device": "/dev/nbd1", 00:06:06.128 "bdev_name": "Malloc1" 00:06:06.128 } 00:06:06.128 ]' 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.128 /dev/nbd1' 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:06.128 /dev/nbd1' 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@65 -- # count=2 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@95 -- # count=2 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@72 
-- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:06.128 256+0 records in 00:06:06.128 256+0 records out 00:06:06.128 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105463 s, 99.4 MB/s 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:06.128 256+0 records in 00:06:06.128 256+0 records out 00:06:06.128 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129178 s, 81.2 MB/s 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:06.128 256+0 records in 00:06:06.128 256+0 records out 00:06:06.128 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139215 s, 75.3 MB/s 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@51 -- # local i 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.128 03:57:20 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.388 03:57:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.388 03:57:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.388 03:57:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.388 03:57:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.388 03:57:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.388 03:57:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.388 03:57:20 -- bdev/nbd_common.sh@41 -- # break 00:06:06.388 03:57:20 -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.388 03:57:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.388 03:57:20 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.388 03:57:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.388 03:57:20 -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:06:06.388 03:57:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.388 03:57:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.388 03:57:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.388 03:57:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.388 03:57:20 -- bdev/nbd_common.sh@41 -- # break 00:06:06.388 03:57:20 -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.388 03:57:20 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.388 03:57:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.388 03:57:20 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.647 03:57:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.647 03:57:21 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.647 03:57:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.647 03:57:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.647 03:57:21 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.647 03:57:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.647 03:57:21 -- bdev/nbd_common.sh@65 -- # true 00:06:06.647 03:57:21 -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.647 03:57:21 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.647 03:57:21 -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.647 03:57:21 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.647 03:57:21 -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.647 03:57:21 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:06.907 03:57:21 -- event/event.sh@35 -- # sleep 3 00:06:07.167 [2024-04-19 03:57:21.466995] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.167 [2024-04-19 03:57:21.528195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.167 [2024-04-19 
03:57:21.528199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.167 [2024-04-19 03:57:21.568495] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.167 [2024-04-19 03:57:21.568536] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:10.458 03:57:24 -- event/event.sh@23 -- # for i in {0..2} 00:06:10.458 03:57:24 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:10.458 spdk_app_start Round 1 00:06:10.458 03:57:24 -- event/event.sh@25 -- # waitforlisten 137483 /var/tmp/spdk-nbd.sock 00:06:10.458 03:57:24 -- common/autotest_common.sh@817 -- # '[' -z 137483 ']' 00:06:10.458 03:57:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.458 03:57:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:10.458 03:57:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:10.458 03:57:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:10.458 03:57:24 -- common/autotest_common.sh@10 -- # set +x 00:06:10.458 03:57:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:10.458 03:57:24 -- common/autotest_common.sh@850 -- # return 0 00:06:10.458 03:57:24 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.458 Malloc0 00:06:10.458 03:57:24 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.458 Malloc1 00:06:10.458 03:57:24 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.458 03:57:24 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.458 03:57:24 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.458 03:57:24 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.458 03:57:24 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.458 03:57:24 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.458 03:57:24 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.458 03:57:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.458 03:57:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.458 03:57:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.458 03:57:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.458 03:57:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.458 03:57:24 -- bdev/nbd_common.sh@12 -- # local i 00:06:10.458 03:57:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.458 03:57:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.458 03:57:24 -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.458 /dev/nbd0 00:06:10.458 03:57:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.458 03:57:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.458 03:57:24 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:10.458 03:57:24 -- common/autotest_common.sh@855 -- # local i 00:06:10.458 03:57:24 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:10.458 03:57:24 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:10.458 03:57:24 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:10.458 03:57:24 -- common/autotest_common.sh@859 -- # break 00:06:10.458 03:57:24 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:10.458 03:57:24 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:10.458 03:57:24 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.458 1+0 records in 00:06:10.458 1+0 records out 00:06:10.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229847 s, 17.8 MB/s 00:06:10.458 03:57:24 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:10.458 03:57:24 -- common/autotest_common.sh@872 -- # size=4096 00:06:10.458 03:57:24 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:10.458 03:57:24 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:10.458 03:57:24 -- common/autotest_common.sh@875 -- # return 0 00:06:10.458 03:57:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.458 03:57:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.459 03:57:24 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:10.718 /dev/nbd1 
00:06:10.718 03:57:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:10.719 03:57:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:10.719 03:57:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:10.719 03:57:25 -- common/autotest_common.sh@855 -- # local i 00:06:10.719 03:57:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:10.719 03:57:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:10.719 03:57:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:10.719 03:57:25 -- common/autotest_common.sh@859 -- # break 00:06:10.719 03:57:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:10.719 03:57:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:10.719 03:57:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.719 1+0 records in 00:06:10.719 1+0 records out 00:06:10.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000105313 s, 38.9 MB/s 00:06:10.719 03:57:25 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:10.719 03:57:25 -- common/autotest_common.sh@872 -- # size=4096 00:06:10.719 03:57:25 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:10.719 03:57:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:10.719 03:57:25 -- common/autotest_common.sh@875 -- # return 0 00:06:10.719 03:57:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.719 03:57:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.719 03:57:25 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.719 03:57:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.719 03:57:25 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:06:10.978 03:57:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:10.978 { 00:06:10.978 "nbd_device": "/dev/nbd0", 00:06:10.978 "bdev_name": "Malloc0" 00:06:10.978 }, 00:06:10.978 { 00:06:10.978 "nbd_device": "/dev/nbd1", 00:06:10.978 "bdev_name": "Malloc1" 00:06:10.978 } 00:06:10.978 ]' 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:10.978 { 00:06:10.978 "nbd_device": "/dev/nbd0", 00:06:10.978 "bdev_name": "Malloc0" 00:06:10.978 }, 00:06:10.978 { 00:06:10.978 "nbd_device": "/dev/nbd1", 00:06:10.978 "bdev_name": "Malloc1" 00:06:10.978 } 00:06:10.978 ]' 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:10.978 /dev/nbd1' 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:10.978 /dev/nbd1' 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@65 -- # count=2 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@95 -- # count=2 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:10.978 256+0 records in 00:06:10.978 256+0 records out 00:06:10.978 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103837 
s, 101 MB/s 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:10.978 256+0 records in 00:06:10.978 256+0 records out 00:06:10.978 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129145 s, 81.2 MB/s 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:10.978 256+0 records in 00:06:10.978 256+0 records out 00:06:10.978 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014061 s, 74.6 MB/s 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:10.978 03:57:25 -- 
bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@51 -- # local i 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.978 03:57:25 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.238 03:57:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.238 03:57:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.238 03:57:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.238 03:57:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.238 03:57:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.238 03:57:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.238 03:57:25 -- bdev/nbd_common.sh@41 -- # break 00:06:11.238 03:57:25 -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.238 03:57:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.238 03:57:25 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:11.238 03:57:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:11.238 03:57:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:11.238 03:57:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:11.238 03:57:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.238 03:57:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.238 03:57:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:11.238 03:57:25 -- bdev/nbd_common.sh@41 -- # break 00:06:11.238 03:57:25 -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.238 03:57:25 
-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.238 03:57:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.238 03:57:25 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.498 03:57:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:11.498 03:57:25 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:11.498 03:57:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.498 03:57:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:11.498 03:57:25 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:11.498 03:57:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.498 03:57:25 -- bdev/nbd_common.sh@65 -- # true 00:06:11.498 03:57:25 -- bdev/nbd_common.sh@65 -- # count=0 00:06:11.498 03:57:25 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:11.498 03:57:25 -- bdev/nbd_common.sh@104 -- # count=0 00:06:11.498 03:57:25 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:11.498 03:57:25 -- bdev/nbd_common.sh@109 -- # return 0 00:06:11.498 03:57:25 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:11.758 03:57:26 -- event/event.sh@35 -- # sleep 3 00:06:12.017 [2024-04-19 03:57:26.332262] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.017 [2024-04-19 03:57:26.387351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.017 [2024-04-19 03:57:26.387353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.017 [2024-04-19 03:57:26.428388] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:12.017 [2024-04-19 03:57:26.428431] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:06:15.310 03:57:29 -- event/event.sh@23 -- # for i in {0..2} 00:06:15.310 03:57:29 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:15.310 spdk_app_start Round 2 00:06:15.310 03:57:29 -- event/event.sh@25 -- # waitforlisten 137483 /var/tmp/spdk-nbd.sock 00:06:15.310 03:57:29 -- common/autotest_common.sh@817 -- # '[' -z 137483 ']' 00:06:15.310 03:57:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.310 03:57:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:15.310 03:57:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:15.310 03:57:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:15.310 03:57:29 -- common/autotest_common.sh@10 -- # set +x 00:06:15.310 03:57:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:15.310 03:57:29 -- common/autotest_common.sh@850 -- # return 0 00:06:15.310 03:57:29 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.310 Malloc0 00:06:15.310 03:57:29 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.310 Malloc1 00:06:15.310 03:57:29 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.310 03:57:29 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.310 03:57:29 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.311 03:57:29 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:15.311 03:57:29 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.311 03:57:29 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:15.311 03:57:29 -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.311 03:57:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.311 03:57:29 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.311 03:57:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:15.311 03:57:29 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.311 03:57:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:15.311 03:57:29 -- bdev/nbd_common.sh@12 -- # local i 00:06:15.311 03:57:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:15.311 03:57:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.311 03:57:29 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:15.311 /dev/nbd0 00:06:15.570 03:57:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:15.570 03:57:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:15.570 03:57:29 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:15.570 03:57:29 -- common/autotest_common.sh@855 -- # local i 00:06:15.570 03:57:29 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:15.570 03:57:29 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:15.570 03:57:29 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:15.570 03:57:29 -- common/autotest_common.sh@859 -- # break 00:06:15.570 03:57:29 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:15.570 03:57:29 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:15.570 03:57:29 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.570 1+0 records in 00:06:15.570 1+0 records out 00:06:15.570 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018238 s, 22.5 MB/s 00:06:15.570 03:57:29 -- common/autotest_common.sh@872 -- # stat -c %s 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:15.570 03:57:29 -- common/autotest_common.sh@872 -- # size=4096 00:06:15.570 03:57:29 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:15.570 03:57:29 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:15.570 03:57:29 -- common/autotest_common.sh@875 -- # return 0 00:06:15.570 03:57:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.570 03:57:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.570 03:57:29 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:15.570 /dev/nbd1 00:06:15.570 03:57:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:15.570 03:57:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:15.570 03:57:30 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:15.570 03:57:30 -- common/autotest_common.sh@855 -- # local i 00:06:15.570 03:57:30 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:15.570 03:57:30 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:15.570 03:57:30 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:15.570 03:57:30 -- common/autotest_common.sh@859 -- # break 00:06:15.570 03:57:30 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:15.570 03:57:30 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:15.570 03:57:30 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.570 1+0 records in 00:06:15.570 1+0 records out 00:06:15.570 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197767 s, 20.7 MB/s 00:06:15.570 03:57:30 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:15.570 03:57:30 -- common/autotest_common.sh@872 -- # size=4096 00:06:15.570 
03:57:30 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:15.570 03:57:30 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:15.570 03:57:30 -- common/autotest_common.sh@875 -- # return 0 00:06:15.570 03:57:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.570 03:57:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.570 03:57:30 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.570 03:57:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.570 03:57:30 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:15.831 { 00:06:15.831 "nbd_device": "/dev/nbd0", 00:06:15.831 "bdev_name": "Malloc0" 00:06:15.831 }, 00:06:15.831 { 00:06:15.831 "nbd_device": "/dev/nbd1", 00:06:15.831 "bdev_name": "Malloc1" 00:06:15.831 } 00:06:15.831 ]' 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:15.831 { 00:06:15.831 "nbd_device": "/dev/nbd0", 00:06:15.831 "bdev_name": "Malloc0" 00:06:15.831 }, 00:06:15.831 { 00:06:15.831 "nbd_device": "/dev/nbd1", 00:06:15.831 "bdev_name": "Malloc1" 00:06:15.831 } 00:06:15.831 ]' 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:15.831 /dev/nbd1' 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:15.831 /dev/nbd1' 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@65 -- # count=2 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@95 -- # count=2 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' 
write 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:15.831 256+0 records in 00:06:15.831 256+0 records out 00:06:15.831 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102806 s, 102 MB/s 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:15.831 256+0 records in 00:06:15.831 256+0 records out 00:06:15.831 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131178 s, 79.9 MB/s 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:15.831 256+0 records in 00:06:15.831 256+0 records out 00:06:15.831 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135774 s, 77.2 MB/s 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:15.831 
03:57:30 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@51 -- # local i 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.831 03:57:30 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:16.091 03:57:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:16.091 03:57:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:16.091 03:57:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:16.091 03:57:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.091 03:57:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.091 03:57:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:16.091 03:57:30 -- bdev/nbd_common.sh@41 -- # break 00:06:16.091 03:57:30 -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.091 03:57:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:06:16.091 03:57:30 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:16.350 03:57:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:16.350 03:57:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:16.350 03:57:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:16.350 03:57:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.350 03:57:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.350 03:57:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.350 03:57:30 -- bdev/nbd_common.sh@41 -- # break 00:06:16.350 03:57:30 -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.350 03:57:30 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.350 03:57:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.350 03:57:30 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.350 03:57:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:16.350 03:57:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.350 03:57:30 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:16.350 03:57:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:16.350 03:57:30 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:16.350 03:57:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.350 03:57:30 -- bdev/nbd_common.sh@65 -- # true 00:06:16.350 03:57:30 -- bdev/nbd_common.sh@65 -- # count=0 00:06:16.350 03:57:30 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:16.350 03:57:30 -- bdev/nbd_common.sh@104 -- # count=0 00:06:16.350 03:57:30 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:16.350 03:57:30 -- bdev/nbd_common.sh@109 -- # return 0 00:06:16.350 03:57:30 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:16.610 
03:57:31 -- event/event.sh@35 -- # sleep 3 00:06:16.869 [2024-04-19 03:57:31.222440] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.869 [2024-04-19 03:57:31.279145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.869 [2024-04-19 03:57:31.279147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.869 [2024-04-19 03:57:31.319808] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:16.869 [2024-04-19 03:57:31.319846] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:20.167 03:57:34 -- event/event.sh@38 -- # waitforlisten 137483 /var/tmp/spdk-nbd.sock 00:06:20.167 03:57:34 -- common/autotest_common.sh@817 -- # '[' -z 137483 ']' 00:06:20.167 03:57:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.167 03:57:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:20.167 03:57:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:20.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
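The `nbd_get_count` steps traced above pipe the `nbd_get_disks` JSON through `jq -r '.[] | .nbd_device'` and count matches of `/dev/nbd`, falling back to 0 via the trailing `true` when no disks remain. A dependency-free sketch of the same counting logic; plain `grep -o` stands in for `jq`, and the function name is made up for illustration:

```shell
# Count /dev/nbd entries in an nbd_get_disks-style JSON reply.
# The real script extracts .nbd_device with jq; grep -o keeps this
# sketch free of external dependencies. The trailing "|| true" mirrors
# the traced "-- # true": grep -c exits nonzero on a zero count.
nbd_count_from_json() {
    printf '%s\n' "$1" | grep -o '/dev/nbd[0-9]*' | grep -c /dev/nbd || true
}

nbd_count_from_json '[]'    # prints 0, as in the trace after nbd_stop_disks
nbd_count_from_json '[{"nbd_device": "/dev/nbd0", "bdev_name": "Malloc0"},
                      {"nbd_device": "/dev/nbd1", "bdev_name": "Malloc1"}]'   # prints 2
```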
00:06:20.167 03:57:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:20.167 03:57:34 -- common/autotest_common.sh@10 -- # set +x 00:06:20.167 03:57:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:20.167 03:57:34 -- common/autotest_common.sh@850 -- # return 0 00:06:20.167 03:57:34 -- event/event.sh@39 -- # killprocess 137483 00:06:20.167 03:57:34 -- common/autotest_common.sh@936 -- # '[' -z 137483 ']' 00:06:20.167 03:57:34 -- common/autotest_common.sh@940 -- # kill -0 137483 00:06:20.167 03:57:34 -- common/autotest_common.sh@941 -- # uname 00:06:20.167 03:57:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:20.167 03:57:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 137483 00:06:20.167 03:57:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:20.167 03:57:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:20.167 03:57:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 137483' 00:06:20.167 killing process with pid 137483 00:06:20.167 03:57:34 -- common/autotest_common.sh@955 -- # kill 137483 00:06:20.167 03:57:34 -- common/autotest_common.sh@960 -- # wait 137483 00:06:20.167 spdk_app_start is called in Round 0. 00:06:20.167 Shutdown signal received, stop current app iteration 00:06:20.167 Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 reinitialization... 00:06:20.167 spdk_app_start is called in Round 1. 00:06:20.167 Shutdown signal received, stop current app iteration 00:06:20.167 Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 reinitialization... 00:06:20.167 spdk_app_start is called in Round 2. 00:06:20.167 Shutdown signal received, stop current app iteration 00:06:20.167 Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 reinitialization... 00:06:20.167 spdk_app_start is called in Round 3. 
00:06:20.167 Shutdown signal received, stop current app iteration 00:06:20.167 03:57:34 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:20.167 03:57:34 -- event/event.sh@42 -- # return 0 00:06:20.167 00:06:20.167 real 0m15.074s 00:06:20.167 user 0m32.404s 00:06:20.167 sys 0m2.158s 00:06:20.167 03:57:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:20.167 03:57:34 -- common/autotest_common.sh@10 -- # set +x 00:06:20.167 ************************************ 00:06:20.167 END TEST app_repeat 00:06:20.167 ************************************ 00:06:20.167 03:57:34 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:20.167 03:57:34 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:20.167 03:57:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:20.167 03:57:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.167 03:57:34 -- common/autotest_common.sh@10 -- # set +x 00:06:20.167 ************************************ 00:06:20.167 START TEST cpu_locks 00:06:20.167 ************************************ 00:06:20.167 03:57:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:20.167 * Looking for test storage... 
00:06:20.167 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:20.167 03:57:34 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:20.167 03:57:34 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:20.167 03:57:34 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:20.167 03:57:34 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:20.167 03:57:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:20.167 03:57:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.167 03:57:34 -- common/autotest_common.sh@10 -- # set +x 00:06:20.427 ************************************ 00:06:20.427 START TEST default_locks 00:06:20.427 ************************************ 00:06:20.427 03:57:34 -- common/autotest_common.sh@1111 -- # default_locks 00:06:20.427 03:57:34 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=140577 00:06:20.427 03:57:34 -- event/cpu_locks.sh@47 -- # waitforlisten 140577 00:06:20.427 03:57:34 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.427 03:57:34 -- common/autotest_common.sh@817 -- # '[' -z 140577 ']' 00:06:20.427 03:57:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.427 03:57:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:20.427 03:57:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.427 03:57:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:20.427 03:57:34 -- common/autotest_common.sh@10 -- # set +x 00:06:20.427 [2024-04-19 03:57:34.811112] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:06:20.427 [2024-04-19 03:57:34.811149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140577 ] 00:06:20.427 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.427 [2024-04-19 03:57:34.860608] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.427 [2024-04-19 03:57:34.926556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.365 03:57:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:21.365 03:57:35 -- common/autotest_common.sh@850 -- # return 0 00:06:21.365 03:57:35 -- event/cpu_locks.sh@49 -- # locks_exist 140577 00:06:21.365 03:57:35 -- event/cpu_locks.sh@22 -- # lslocks -p 140577 00:06:21.365 03:57:35 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.625 lslocks: write error 00:06:21.625 03:57:35 -- event/cpu_locks.sh@50 -- # killprocess 140577 00:06:21.625 03:57:35 -- common/autotest_common.sh@936 -- # '[' -z 140577 ']' 00:06:21.625 03:57:35 -- common/autotest_common.sh@940 -- # kill -0 140577 00:06:21.625 03:57:35 -- common/autotest_common.sh@941 -- # uname 00:06:21.625 03:57:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:21.625 03:57:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140577 00:06:21.625 03:57:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:21.625 03:57:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:21.625 03:57:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 140577' 00:06:21.625 killing process with pid 140577 00:06:21.625 03:57:35 -- common/autotest_common.sh@955 -- # kill 140577 00:06:21.625 03:57:35 -- common/autotest_common.sh@960 -- # wait 140577 00:06:21.885 03:57:36 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 140577 00:06:21.885 03:57:36 -- common/autotest_common.sh@638 -- # 
local es=0 00:06:21.885 03:57:36 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 140577 00:06:21.885 03:57:36 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:21.885 03:57:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:21.885 03:57:36 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:21.885 03:57:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:21.885 03:57:36 -- common/autotest_common.sh@641 -- # waitforlisten 140577 00:06:21.885 03:57:36 -- common/autotest_common.sh@817 -- # '[' -z 140577 ']' 00:06:21.885 03:57:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.885 03:57:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:21.885 03:57:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
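The `killprocess` sequence traced here checks the pid with `kill -0`, confirms the platform is Linux, reads the process name (`reactor_0` for spdk_tgt) via `ps`, refuses to kill a bare `sudo` wrapper, then sends SIGTERM and waits. A reduced sketch of that shape, with a short-lived `sleep` standing in for the spdk_tgt reactor; the function name and structure are illustrative, not the autotest_common.sh implementation:

```shell
# Illustrative reduction of the killprocess helper: verify the pid is
# alive, refuse to kill a sudo wrapper, then SIGTERM it and reap it.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1      # is it still running?
    local name
    name=$(ps -o comm= -p "$pid")               # e.g. reactor_0 for spdk_tgt
    [ "$name" = "sudo" ] && return 1            # never kill the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap; exit 143 (SIGTERM) is expected
    return 0
}

sleep 5 &
killprocess_sketch $!
```

Checking the command name before killing is the safety step visible in the trace: if the tracked pid were the `sudo` parent rather than the reactor, terminating it could leave the privileged child running.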
00:06:21.885 03:57:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:21.885 03:57:36 -- common/autotest_common.sh@10 -- # set +x 00:06:21.885 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (140577) - No such process 00:06:21.885 ERROR: process (pid: 140577) is no longer running 00:06:21.885 03:57:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:21.885 03:57:36 -- common/autotest_common.sh@850 -- # return 1 00:06:21.885 03:57:36 -- common/autotest_common.sh@641 -- # es=1 00:06:21.885 03:57:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:21.885 03:57:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:21.885 03:57:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:21.885 03:57:36 -- event/cpu_locks.sh@54 -- # no_locks 00:06:21.885 03:57:36 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:21.885 03:57:36 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:21.885 03:57:36 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:21.885 00:06:21.885 real 0m1.546s 00:06:21.885 user 0m1.590s 00:06:21.885 sys 0m0.506s 00:06:21.885 03:57:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:21.885 03:57:36 -- common/autotest_common.sh@10 -- # set +x 00:06:21.885 ************************************ 00:06:21.885 END TEST default_locks 00:06:21.885 ************************************ 00:06:21.885 03:57:36 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:21.885 03:57:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:21.886 03:57:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.886 03:57:36 -- common/autotest_common.sh@10 -- # set +x 00:06:22.145 ************************************ 00:06:22.145 START TEST default_locks_via_rpc 00:06:22.145 ************************************ 00:06:22.145 03:57:36 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:06:22.145 03:57:36 -- event/cpu_locks.sh@62 
-- # spdk_tgt_pid=140909 00:06:22.145 03:57:36 -- event/cpu_locks.sh@63 -- # waitforlisten 140909 00:06:22.145 03:57:36 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.145 03:57:36 -- common/autotest_common.sh@817 -- # '[' -z 140909 ']' 00:06:22.145 03:57:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.145 03:57:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:22.145 03:57:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.145 03:57:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:22.145 03:57:36 -- common/autotest_common.sh@10 -- # set +x 00:06:22.145 [2024-04-19 03:57:36.516203] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:06:22.145 [2024-04-19 03:57:36.516243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140909 ] 00:06:22.145 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.145 [2024-04-19 03:57:36.568373] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.145 [2024-04-19 03:57:36.633242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.083 03:57:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:23.083 03:57:37 -- common/autotest_common.sh@850 -- # return 0 00:06:23.083 03:57:37 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:23.083 03:57:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.083 03:57:37 -- common/autotest_common.sh@10 -- # set +x 00:06:23.083 03:57:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.083 
03:57:37 -- event/cpu_locks.sh@67 -- # no_locks 00:06:23.083 03:57:37 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:23.083 03:57:37 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:23.083 03:57:37 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:23.083 03:57:37 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:23.083 03:57:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.083 03:57:37 -- common/autotest_common.sh@10 -- # set +x 00:06:23.083 03:57:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.083 03:57:37 -- event/cpu_locks.sh@71 -- # locks_exist 140909 00:06:23.083 03:57:37 -- event/cpu_locks.sh@22 -- # lslocks -p 140909 00:06:23.083 03:57:37 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.342 03:57:37 -- event/cpu_locks.sh@73 -- # killprocess 140909 00:06:23.342 03:57:37 -- common/autotest_common.sh@936 -- # '[' -z 140909 ']' 00:06:23.342 03:57:37 -- common/autotest_common.sh@940 -- # kill -0 140909 00:06:23.342 03:57:37 -- common/autotest_common.sh@941 -- # uname 00:06:23.342 03:57:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:23.342 03:57:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140909 00:06:23.342 03:57:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:23.342 03:57:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:23.342 03:57:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 140909' 00:06:23.342 killing process with pid 140909 00:06:23.342 03:57:37 -- common/autotest_common.sh@955 -- # kill 140909 00:06:23.342 03:57:37 -- common/autotest_common.sh@960 -- # wait 140909 00:06:23.602 00:06:23.602 real 0m1.548s 00:06:23.602 user 0m1.610s 00:06:23.602 sys 0m0.491s 00:06:23.602 03:57:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:23.602 03:57:38 -- common/autotest_common.sh@10 -- # set +x 00:06:23.602 ************************************ 00:06:23.602 END TEST 
default_locks_via_rpc 00:06:23.602 ************************************ 00:06:23.602 03:57:38 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:23.602 03:57:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:23.602 03:57:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.602 03:57:38 -- common/autotest_common.sh@10 -- # set +x 00:06:23.862 ************************************ 00:06:23.862 START TEST non_locking_app_on_locked_coremask 00:06:23.862 ************************************ 00:06:23.862 03:57:38 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:06:23.862 03:57:38 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=141206 00:06:23.862 03:57:38 -- event/cpu_locks.sh@81 -- # waitforlisten 141206 /var/tmp/spdk.sock 00:06:23.862 03:57:38 -- common/autotest_common.sh@817 -- # '[' -z 141206 ']' 00:06:23.862 03:57:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.862 03:57:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:23.862 03:57:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.862 03:57:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:23.862 03:57:38 -- common/autotest_common.sh@10 -- # set +x 00:06:23.862 03:57:38 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.862 [2024-04-19 03:57:38.217136] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
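The `locks_exist` helper seen throughout these traces runs `lslocks -p <pid>` and greps for `spdk_cpu_lock`, i.e. it verifies the target still holds its per-core file lock (hence the `lslocks: write error` noise on a broken pipe). The same "is this lock held?" check can be reproduced self-contained with `flock(1)`: one process holds an exclusive lock and a second proves it by failing a non-blocking acquisition. File names and timings below are purely illustrative:

```shell
# One side takes an exclusive flock (as spdk_tgt does on its spdk_cpu_lock
# file); the other side proves the lock is held by failing flock -n.
lockfile=$(mktemp)
(
    flock -x 9       # hold the lock, like the spdk_tgt reactor
    sleep 2
) 9>"$lockfile" &
holder=$!
sleep 0.2            # give the holder time to acquire the lock
if ! flock -n -x "$lockfile" -c true; then
    echo "lock on $lockfile is held"
fi
wait "$holder" || true
rm -f "$lockfile"
```

This is why `default_locks` above can detect a stale instance without any RPC traffic: the kernel advertises the advisory lock for as long as the holder lives.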
00:06:23.862 [2024-04-19 03:57:38.217177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141206 ] 00:06:23.862 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.862 [2024-04-19 03:57:38.267759] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.862 [2024-04-19 03:57:38.342566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.800 03:57:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:24.800 03:57:38 -- common/autotest_common.sh@850 -- # return 0 00:06:24.800 03:57:38 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=141410 00:06:24.800 03:57:38 -- event/cpu_locks.sh@85 -- # waitforlisten 141410 /var/tmp/spdk2.sock 00:06:24.800 03:57:38 -- common/autotest_common.sh@817 -- # '[' -z 141410 ']' 00:06:24.800 03:57:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.800 03:57:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:24.800 03:57:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.800 03:57:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:24.800 03:57:38 -- common/autotest_common.sh@10 -- # set +x 00:06:24.800 03:57:38 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:24.800 [2024-04-19 03:57:39.025267] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:06:24.800 [2024-04-19 03:57:39.025310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141410 ] 00:06:24.800 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.800 [2024-04-19 03:57:39.089448] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:24.800 [2024-04-19 03:57:39.089470] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.800 [2024-04-19 03:57:39.223991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.368 03:57:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:25.368 03:57:39 -- common/autotest_common.sh@850 -- # return 0 00:06:25.368 03:57:39 -- event/cpu_locks.sh@87 -- # locks_exist 141206 00:06:25.368 03:57:39 -- event/cpu_locks.sh@22 -- # lslocks -p 141206 00:06:25.368 03:57:39 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.627 lslocks: write error 00:06:25.627 03:57:40 -- event/cpu_locks.sh@89 -- # killprocess 141206 00:06:25.887 03:57:40 -- common/autotest_common.sh@936 -- # '[' -z 141206 ']' 00:06:25.887 03:57:40 -- common/autotest_common.sh@940 -- # kill -0 141206 00:06:25.887 03:57:40 -- common/autotest_common.sh@941 -- # uname 00:06:25.887 03:57:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:25.887 03:57:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 141206 00:06:25.887 03:57:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:25.887 03:57:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:25.887 03:57:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 141206' 00:06:25.887 killing process with pid 141206 00:06:25.887 03:57:40 -- common/autotest_common.sh@955 -- # kill 141206 00:06:25.887 03:57:40 -- common/autotest_common.sh@960 -- # wait 141206 00:06:26.454 03:57:40 -- 
event/cpu_locks.sh@90 -- # killprocess 141410 00:06:26.454 03:57:40 -- common/autotest_common.sh@936 -- # '[' -z 141410 ']' 00:06:26.454 03:57:40 -- common/autotest_common.sh@940 -- # kill -0 141410 00:06:26.454 03:57:40 -- common/autotest_common.sh@941 -- # uname 00:06:26.454 03:57:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:26.454 03:57:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 141410 00:06:26.454 03:57:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:26.454 03:57:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:26.454 03:57:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 141410' 00:06:26.454 killing process with pid 141410 00:06:26.454 03:57:40 -- common/autotest_common.sh@955 -- # kill 141410 00:06:26.454 03:57:40 -- common/autotest_common.sh@960 -- # wait 141410 00:06:26.712 00:06:26.712 real 0m3.049s 00:06:26.712 user 0m3.232s 00:06:26.712 sys 0m0.822s 00:06:26.712 03:57:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:26.712 03:57:41 -- common/autotest_common.sh@10 -- # set +x 00:06:26.712 ************************************ 00:06:26.712 END TEST non_locking_app_on_locked_coremask 00:06:26.712 ************************************ 00:06:26.972 03:57:41 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:26.972 03:57:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:26.972 03:57:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.972 03:57:41 -- common/autotest_common.sh@10 -- # set +x 00:06:26.972 ************************************ 00:06:26.972 START TEST locking_app_on_unlocked_coremask 00:06:26.972 ************************************ 00:06:26.972 03:57:41 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:06:26.972 03:57:41 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=141791 00:06:26.972 03:57:41 -- 
event/cpu_locks.sh@99 -- # waitforlisten 141791 /var/tmp/spdk.sock 00:06:26.972 03:57:41 -- common/autotest_common.sh@817 -- # '[' -z 141791 ']' 00:06:26.972 03:57:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.972 03:57:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:26.972 03:57:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.972 03:57:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:26.972 03:57:41 -- common/autotest_common.sh@10 -- # set +x 00:06:26.972 03:57:41 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:26.972 [2024-04-19 03:57:41.424952] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:06:26.972 [2024-04-19 03:57:41.424988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141791 ] 00:06:26.972 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.972 [2024-04-19 03:57:41.473852] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:26.972 [2024-04-19 03:57:41.473875] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.232 [2024-04-19 03:57:41.546786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.801 03:57:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:27.801 03:57:42 -- common/autotest_common.sh@850 -- # return 0 00:06:27.801 03:57:42 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=142042 00:06:27.801 03:57:42 -- event/cpu_locks.sh@103 -- # waitforlisten 142042 /var/tmp/spdk2.sock 00:06:27.801 03:57:42 -- common/autotest_common.sh@817 -- # '[' -z 142042 ']' 00:06:27.801 03:57:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.801 03:57:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:27.801 03:57:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.801 03:57:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:27.801 03:57:42 -- common/autotest_common.sh@10 -- # set +x 00:06:27.801 03:57:42 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:27.801 [2024-04-19 03:57:42.225336] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:06:27.801 [2024-04-19 03:57:42.225378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142042 ] 00:06:27.801 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.801 [2024-04-19 03:57:42.292023] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.060 [2024-04-19 03:57:42.437968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.629 03:57:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:28.629 03:57:42 -- common/autotest_common.sh@850 -- # return 0 00:06:28.629 03:57:42 -- event/cpu_locks.sh@105 -- # locks_exist 142042 00:06:28.629 03:57:42 -- event/cpu_locks.sh@22 -- # lslocks -p 142042 00:06:28.629 03:57:42 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.888 lslocks: write error 00:06:28.888 03:57:43 -- event/cpu_locks.sh@107 -- # killprocess 141791 00:06:28.888 03:57:43 -- common/autotest_common.sh@936 -- # '[' -z 141791 ']' 00:06:28.888 03:57:43 -- common/autotest_common.sh@940 -- # kill -0 141791 00:06:28.888 03:57:43 -- common/autotest_common.sh@941 -- # uname 00:06:28.888 03:57:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:28.888 03:57:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 141791 00:06:28.888 03:57:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:28.888 03:57:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:28.888 03:57:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 141791' 00:06:28.888 killing process with pid 141791 00:06:28.888 03:57:43 -- common/autotest_common.sh@955 -- # kill 141791 00:06:28.888 03:57:43 -- common/autotest_common.sh@960 -- # wait 141791 00:06:29.457 03:57:43 -- event/cpu_locks.sh@108 -- # killprocess 142042 00:06:29.457 03:57:43 -- common/autotest_common.sh@936 -- # '[' -z 
142042 ']' 00:06:29.457 03:57:43 -- common/autotest_common.sh@940 -- # kill -0 142042 00:06:29.457 03:57:43 -- common/autotest_common.sh@941 -- # uname 00:06:29.457 03:57:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:29.457 03:57:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 142042 00:06:29.716 03:57:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:29.716 03:57:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:29.716 03:57:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 142042' 00:06:29.716 killing process with pid 142042 00:06:29.716 03:57:44 -- common/autotest_common.sh@955 -- # kill 142042 00:06:29.716 03:57:44 -- common/autotest_common.sh@960 -- # wait 142042 00:06:29.976 00:06:29.976 real 0m2.949s 00:06:29.976 user 0m3.110s 00:06:29.976 sys 0m0.786s 00:06:29.976 03:57:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:29.976 03:57:44 -- common/autotest_common.sh@10 -- # set +x 00:06:29.976 ************************************ 00:06:29.976 END TEST locking_app_on_unlocked_coremask 00:06:29.976 ************************************ 00:06:29.976 03:57:44 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:29.976 03:57:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:29.976 03:57:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.976 03:57:44 -- common/autotest_common.sh@10 -- # set +x 00:06:29.976 ************************************ 00:06:29.976 START TEST locking_app_on_locked_coremask 00:06:29.976 ************************************ 00:06:29.976 03:57:44 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:06:29.976 03:57:44 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=142407 00:06:29.976 03:57:44 -- event/cpu_locks.sh@116 -- # waitforlisten 142407 /var/tmp/spdk.sock 00:06:29.976 03:57:44 -- common/autotest_common.sh@817 -- # '[' -z 
142407 ']' 00:06:29.976 03:57:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.976 03:57:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:29.976 03:57:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.976 03:57:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:29.976 03:57:44 -- common/autotest_common.sh@10 -- # set +x 00:06:29.976 03:57:44 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:30.236 [2024-04-19 03:57:44.534829] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:06:30.236 [2024-04-19 03:57:44.534868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142407 ] 00:06:30.236 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.236 [2024-04-19 03:57:44.584301] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.236 [2024-04-19 03:57:44.658113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.805 03:57:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:30.805 03:57:45 -- common/autotest_common.sh@850 -- # return 0 00:06:30.805 03:57:45 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=142618 00:06:30.805 03:57:45 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 142618 /var/tmp/spdk2.sock 00:06:30.805 03:57:45 -- common/autotest_common.sh@638 -- # local es=0 00:06:30.805 03:57:45 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 142618 /var/tmp/spdk2.sock 00:06:30.805 03:57:45 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:30.805 03:57:45 -- 
event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:30.805 03:57:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:30.805 03:57:45 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:30.805 03:57:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:30.805 03:57:45 -- common/autotest_common.sh@641 -- # waitforlisten 142618 /var/tmp/spdk2.sock 00:06:30.805 03:57:45 -- common/autotest_common.sh@817 -- # '[' -z 142618 ']' 00:06:30.805 03:57:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.805 03:57:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:30.805 03:57:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.805 03:57:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:30.805 03:57:45 -- common/autotest_common.sh@10 -- # set +x 00:06:31.064 [2024-04-19 03:57:45.343600] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:06:31.064 [2024-04-19 03:57:45.343642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142618 ] 00:06:31.064 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.064 [2024-04-19 03:57:45.412219] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 142407 has claimed it. 00:06:31.064 [2024-04-19 03:57:45.412250] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:31.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (142618) - No such process 00:06:31.633 ERROR: process (pid: 142618) is no longer running 00:06:31.633 03:57:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:31.633 03:57:45 -- common/autotest_common.sh@850 -- # return 1 00:06:31.633 03:57:45 -- common/autotest_common.sh@641 -- # es=1 00:06:31.633 03:57:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:31.633 03:57:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:31.633 03:57:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:31.633 03:57:45 -- event/cpu_locks.sh@122 -- # locks_exist 142407 00:06:31.633 03:57:45 -- event/cpu_locks.sh@22 -- # lslocks -p 142407 00:06:31.633 03:57:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.893 lslocks: write error 00:06:31.893 03:57:46 -- event/cpu_locks.sh@124 -- # killprocess 142407 00:06:31.893 03:57:46 -- common/autotest_common.sh@936 -- # '[' -z 142407 ']' 00:06:31.893 03:57:46 -- common/autotest_common.sh@940 -- # kill -0 142407 00:06:31.893 03:57:46 -- common/autotest_common.sh@941 -- # uname 00:06:31.893 03:57:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:31.893 03:57:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 142407 00:06:31.893 03:57:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:31.893 03:57:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:31.893 03:57:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 142407' 00:06:31.893 killing process with pid 142407 00:06:31.893 03:57:46 -- common/autotest_common.sh@955 -- # kill 142407 00:06:31.893 03:57:46 -- common/autotest_common.sh@960 -- # wait 142407 00:06:32.153 00:06:32.153 real 0m2.181s 00:06:32.153 user 0m2.377s 00:06:32.153 sys 0m0.563s 00:06:32.153 03:57:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:32.153 03:57:46 -- 
common/autotest_common.sh@10 -- # set +x 00:06:32.153 ************************************ 00:06:32.153 END TEST locking_app_on_locked_coremask 00:06:32.153 ************************************ 00:06:32.412 03:57:46 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:32.412 03:57:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:32.412 03:57:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.412 03:57:46 -- common/autotest_common.sh@10 -- # set +x 00:06:32.412 ************************************ 00:06:32.412 START TEST locking_overlapped_coremask 00:06:32.412 ************************************ 00:06:32.412 03:57:46 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:06:32.412 03:57:46 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=142922 00:06:32.412 03:57:46 -- event/cpu_locks.sh@133 -- # waitforlisten 142922 /var/tmp/spdk.sock 00:06:32.412 03:57:46 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:32.412 03:57:46 -- common/autotest_common.sh@817 -- # '[' -z 142922 ']' 00:06:32.412 03:57:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.412 03:57:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:32.412 03:57:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.412 03:57:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:32.412 03:57:46 -- common/autotest_common.sh@10 -- # set +x 00:06:32.412 [2024-04-19 03:57:46.870821] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:06:32.412 [2024-04-19 03:57:46.870858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142922 ] 00:06:32.412 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.412 [2024-04-19 03:57:46.923968] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.671 [2024-04-19 03:57:46.992776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.671 [2024-04-19 03:57:46.992859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.671 [2024-04-19 03:57:46.992860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.240 03:57:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:33.240 03:57:47 -- common/autotest_common.sh@850 -- # return 0 00:06:33.240 03:57:47 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:33.240 03:57:47 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=143162 00:06:33.240 03:57:47 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 143162 /var/tmp/spdk2.sock 00:06:33.240 03:57:47 -- common/autotest_common.sh@638 -- # local es=0 00:06:33.240 03:57:47 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 143162 /var/tmp/spdk2.sock 00:06:33.240 03:57:47 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:33.240 03:57:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:33.240 03:57:47 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:33.240 03:57:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:33.240 03:57:47 -- common/autotest_common.sh@641 -- # waitforlisten 143162 /var/tmp/spdk2.sock 00:06:33.240 03:57:47 -- common/autotest_common.sh@817 -- # '[' -z 143162 ']' 00:06:33.240 03:57:47 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:06:33.240 03:57:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:33.240 03:57:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.240 03:57:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:33.240 03:57:47 -- common/autotest_common.sh@10 -- # set +x 00:06:33.240 [2024-04-19 03:57:47.678480] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:06:33.240 [2024-04-19 03:57:47.678523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143162 ] 00:06:33.240 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.240 [2024-04-19 03:57:47.751916] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 142922 has claimed it. 00:06:33.240 [2024-04-19 03:57:47.751953] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:33.808 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (143162) - No such process 00:06:33.808 ERROR: process (pid: 143162) is no longer running 00:06:33.808 03:57:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:33.808 03:57:48 -- common/autotest_common.sh@850 -- # return 1 00:06:33.808 03:57:48 -- common/autotest_common.sh@641 -- # es=1 00:06:33.808 03:57:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:33.808 03:57:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:33.808 03:57:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:33.808 03:57:48 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:33.808 03:57:48 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:33.808 03:57:48 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:33.808 03:57:48 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:33.808 03:57:48 -- event/cpu_locks.sh@141 -- # killprocess 142922 00:06:33.808 03:57:48 -- common/autotest_common.sh@936 -- # '[' -z 142922 ']' 00:06:33.808 03:57:48 -- common/autotest_common.sh@940 -- # kill -0 142922 00:06:33.808 03:57:48 -- common/autotest_common.sh@941 -- # uname 00:06:33.808 03:57:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:33.808 03:57:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 142922 00:06:34.067 03:57:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:34.067 03:57:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:34.067 03:57:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 142922' 00:06:34.067 killing process with pid 142922 00:06:34.067 03:57:48 -- 
common/autotest_common.sh@955 -- # kill 142922 00:06:34.067 03:57:48 -- common/autotest_common.sh@960 -- # wait 142922 00:06:34.327 00:06:34.327 real 0m1.855s 00:06:34.327 user 0m5.190s 00:06:34.327 sys 0m0.381s 00:06:34.327 03:57:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:34.327 03:57:48 -- common/autotest_common.sh@10 -- # set +x 00:06:34.327 ************************************ 00:06:34.327 END TEST locking_overlapped_coremask 00:06:34.327 ************************************ 00:06:34.327 03:57:48 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:34.327 03:57:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:34.327 03:57:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.327 03:57:48 -- common/autotest_common.sh@10 -- # set +x 00:06:34.586 ************************************ 00:06:34.586 START TEST locking_overlapped_coremask_via_rpc 00:06:34.586 ************************************ 00:06:34.586 03:57:48 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:06:34.586 03:57:48 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=143312 00:06:34.586 03:57:48 -- event/cpu_locks.sh@149 -- # waitforlisten 143312 /var/tmp/spdk.sock 00:06:34.586 03:57:48 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:34.586 03:57:48 -- common/autotest_common.sh@817 -- # '[' -z 143312 ']' 00:06:34.586 03:57:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.586 03:57:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:34.586 03:57:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:34.586 03:57:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:34.586 03:57:48 -- common/autotest_common.sh@10 -- # set +x 00:06:34.587 [2024-04-19 03:57:48.904676] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:06:34.587 [2024-04-19 03:57:48.904720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143312 ] 00:06:34.587 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.587 [2024-04-19 03:57:48.957663] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:34.587 [2024-04-19 03:57:48.957690] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:34.587 [2024-04-19 03:57:49.031255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.587 [2024-04-19 03:57:49.031267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.587 [2024-04-19 03:57:49.031269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.155 03:57:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:35.155 03:57:49 -- common/autotest_common.sh@850 -- # return 0 00:06:35.155 03:57:49 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=143498 00:06:35.155 03:57:49 -- event/cpu_locks.sh@153 -- # waitforlisten 143498 /var/tmp/spdk2.sock 00:06:35.155 03:57:49 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:35.155 03:57:49 -- common/autotest_common.sh@817 -- # '[' -z 143498 ']' 00:06:35.155 03:57:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.155 03:57:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:35.155 03:57:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk2.sock...' 00:06:35.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.155 03:57:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:35.155 03:57:49 -- common/autotest_common.sh@10 -- # set +x 00:06:35.414 [2024-04-19 03:57:49.721520] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:06:35.414 [2024-04-19 03:57:49.721566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143498 ] 00:06:35.414 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.414 [2024-04-19 03:57:49.793860] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:35.414 [2024-04-19 03:57:49.793888] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:35.414 [2024-04-19 03:57:49.930983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.414 [2024-04-19 03:57:49.931092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.414 [2024-04-19 03:57:49.931093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:35.980 03:57:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:35.980 03:57:50 -- common/autotest_common.sh@850 -- # return 0 00:06:35.980 03:57:50 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:35.980 03:57:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:35.980 03:57:50 -- common/autotest_common.sh@10 -- # set +x 00:06:35.980 03:57:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:35.980 03:57:50 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:35.980 03:57:50 -- common/autotest_common.sh@638 -- # local es=0 00:06:35.980 03:57:50 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock 
framework_enable_cpumask_locks 00:06:35.980 03:57:50 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:06:35.980 03:57:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:35.980 03:57:50 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:06:35.980 03:57:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:35.980 03:57:50 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:35.980 03:57:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:35.980 03:57:50 -- common/autotest_common.sh@10 -- # set +x 00:06:35.980 [2024-04-19 03:57:50.507473] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 143312 has claimed it. 00:06:36.239 request: 00:06:36.239 { 00:06:36.239 "method": "framework_enable_cpumask_locks", 00:06:36.239 "req_id": 1 00:06:36.239 } 00:06:36.239 Got JSON-RPC error response 00:06:36.239 response: 00:06:36.239 { 00:06:36.239 "code": -32603, 00:06:36.239 "message": "Failed to claim CPU core: 2" 00:06:36.239 } 00:06:36.239 03:57:50 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:06:36.239 03:57:50 -- common/autotest_common.sh@641 -- # es=1 00:06:36.239 03:57:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:36.239 03:57:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:36.239 03:57:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:36.239 03:57:50 -- event/cpu_locks.sh@158 -- # waitforlisten 143312 /var/tmp/spdk.sock 00:06:36.239 03:57:50 -- common/autotest_common.sh@817 -- # '[' -z 143312 ']' 00:06:36.239 03:57:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.239 03:57:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:36.239 03:57:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:36.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.239 03:57:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:36.239 03:57:50 -- common/autotest_common.sh@10 -- # set +x 00:06:36.239 03:57:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:36.239 03:57:50 -- common/autotest_common.sh@850 -- # return 0 00:06:36.239 03:57:50 -- event/cpu_locks.sh@159 -- # waitforlisten 143498 /var/tmp/spdk2.sock 00:06:36.239 03:57:50 -- common/autotest_common.sh@817 -- # '[' -z 143498 ']' 00:06:36.239 03:57:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.239 03:57:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:36.239 03:57:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.239 03:57:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:36.239 03:57:50 -- common/autotest_common.sh@10 -- # set +x 00:06:36.498 03:57:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:36.498 03:57:50 -- common/autotest_common.sh@850 -- # return 0 00:06:36.498 03:57:50 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:36.498 03:57:50 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:36.498 03:57:50 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:36.498 03:57:50 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:36.498 00:06:36.498 real 0m2.003s 00:06:36.498 user 0m0.764s 00:06:36.498 sys 0m0.171s 00:06:36.498 03:57:50 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:06:36.498 03:57:50 -- common/autotest_common.sh@10 -- # set +x 00:06:36.498 ************************************ 00:06:36.498 END TEST locking_overlapped_coremask_via_rpc 00:06:36.498 ************************************ 00:06:36.498 03:57:50 -- event/cpu_locks.sh@174 -- # cleanup 00:06:36.498 03:57:50 -- event/cpu_locks.sh@15 -- # [[ -z 143312 ]] 00:06:36.498 03:57:50 -- event/cpu_locks.sh@15 -- # killprocess 143312 00:06:36.498 03:57:50 -- common/autotest_common.sh@936 -- # '[' -z 143312 ']' 00:06:36.498 03:57:50 -- common/autotest_common.sh@940 -- # kill -0 143312 00:06:36.498 03:57:50 -- common/autotest_common.sh@941 -- # uname 00:06:36.498 03:57:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:36.498 03:57:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 143312 00:06:36.498 03:57:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:36.498 03:57:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:36.498 03:57:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 143312' 00:06:36.498 killing process with pid 143312 00:06:36.498 03:57:50 -- common/autotest_common.sh@955 -- # kill 143312 00:06:36.498 03:57:50 -- common/autotest_common.sh@960 -- # wait 143312 00:06:36.757 03:57:51 -- event/cpu_locks.sh@16 -- # [[ -z 143498 ]] 00:06:36.757 03:57:51 -- event/cpu_locks.sh@16 -- # killprocess 143498 00:06:36.757 03:57:51 -- common/autotest_common.sh@936 -- # '[' -z 143498 ']' 00:06:36.757 03:57:51 -- common/autotest_common.sh@940 -- # kill -0 143498 00:06:36.757 03:57:51 -- common/autotest_common.sh@941 -- # uname 00:06:36.757 03:57:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:36.757 03:57:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 143498 00:06:37.016 03:57:51 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:37.016 03:57:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 
00:06:37.016 03:57:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 143498' 00:06:37.016 killing process with pid 143498 00:06:37.016 03:57:51 -- common/autotest_common.sh@955 -- # kill 143498 00:06:37.016 03:57:51 -- common/autotest_common.sh@960 -- # wait 143498 00:06:37.275 03:57:51 -- event/cpu_locks.sh@18 -- # rm -f 00:06:37.275 03:57:51 -- event/cpu_locks.sh@1 -- # cleanup 00:06:37.275 03:57:51 -- event/cpu_locks.sh@15 -- # [[ -z 143312 ]] 00:06:37.275 03:57:51 -- event/cpu_locks.sh@15 -- # killprocess 143312 00:06:37.276 03:57:51 -- common/autotest_common.sh@936 -- # '[' -z 143312 ']' 00:06:37.276 03:57:51 -- common/autotest_common.sh@940 -- # kill -0 143312 00:06:37.276 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (143312) - No such process 00:06:37.276 03:57:51 -- common/autotest_common.sh@963 -- # echo 'Process with pid 143312 is not found' 00:06:37.276 Process with pid 143312 is not found 00:06:37.276 03:57:51 -- event/cpu_locks.sh@16 -- # [[ -z 143498 ]] 00:06:37.276 03:57:51 -- event/cpu_locks.sh@16 -- # killprocess 143498 00:06:37.276 03:57:51 -- common/autotest_common.sh@936 -- # '[' -z 143498 ']' 00:06:37.276 03:57:51 -- common/autotest_common.sh@940 -- # kill -0 143498 00:06:37.276 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (143498) - No such process 00:06:37.276 03:57:51 -- common/autotest_common.sh@963 -- # echo 'Process with pid 143498 is not found' 00:06:37.276 Process with pid 143498 is not found 00:06:37.276 03:57:51 -- event/cpu_locks.sh@18 -- # rm -f 00:06:37.276 00:06:37.276 real 0m17.114s 00:06:37.276 user 0m28.372s 00:06:37.276 sys 0m4.911s 00:06:37.276 03:57:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:37.276 03:57:51 -- common/autotest_common.sh@10 -- # set +x 00:06:37.276 ************************************ 00:06:37.276 END TEST cpu_locks 00:06:37.276 ************************************ 
00:06:37.276 00:06:37.276 real 0m42.165s 00:06:37.276 user 1m18.165s 00:06:37.276 sys 0m8.357s 00:06:37.276 03:57:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:37.276 03:57:51 -- common/autotest_common.sh@10 -- # set +x 00:06:37.276 ************************************ 00:06:37.276 END TEST event 00:06:37.276 ************************************ 00:06:37.276 03:57:51 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:37.276 03:57:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:37.276 03:57:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.276 03:57:51 -- common/autotest_common.sh@10 -- # set +x 00:06:37.535 ************************************ 00:06:37.535 START TEST thread 00:06:37.535 ************************************ 00:06:37.535 03:57:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:37.535 * Looking for test storage... 00:06:37.535 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:06:37.535 03:57:51 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:37.535 03:57:51 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:37.535 03:57:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.535 03:57:51 -- common/autotest_common.sh@10 -- # set +x 00:06:37.535 ************************************ 00:06:37.535 START TEST thread_poller_perf 00:06:37.535 ************************************ 00:06:37.535 03:57:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:37.794 [2024-04-19 03:57:52.070112] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:06:37.794 [2024-04-19 03:57:52.070180] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144129 ] 00:06:37.794 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.794 [2024-04-19 03:57:52.124820] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.794 [2024-04-19 03:57:52.192616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.794 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:39.173 ====================================== 00:06:39.173 busy:2707924674 (cyc) 00:06:39.173 total_run_count: 451000 00:06:39.173 tsc_hz: 2700000000 (cyc) 00:06:39.173 ====================================== 00:06:39.173 poller_cost: 6004 (cyc), 2223 (nsec) 00:06:39.173 00:06:39.173 real 0m1.235s 00:06:39.173 user 0m1.153s 00:06:39.173 sys 0m0.078s 00:06:39.173 03:57:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:39.173 03:57:53 -- common/autotest_common.sh@10 -- # set +x 00:06:39.173 ************************************ 00:06:39.173 END TEST thread_poller_perf 00:06:39.173 ************************************ 00:06:39.173 03:57:53 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:39.173 03:57:53 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:39.173 03:57:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.173 03:57:53 -- common/autotest_common.sh@10 -- # set +x 00:06:39.173 ************************************ 00:06:39.173 START TEST thread_poller_perf 00:06:39.173 ************************************ 00:06:39.173 03:57:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:39.173 [2024-04-19 
03:57:53.454505] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:06:39.173 [2024-04-19 03:57:53.454543] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144413 ] 00:06:39.173 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.173 [2024-04-19 03:57:53.504068] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.173 [2024-04-19 03:57:53.571196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.173 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:40.552 ====================================== 00:06:40.552 busy:2701822678 (cyc) 00:06:40.552 total_run_count: 5783000 00:06:40.552 tsc_hz: 2700000000 (cyc) 00:06:40.552 ====================================== 00:06:40.552 poller_cost: 467 (cyc), 172 (nsec) 00:06:40.552 00:06:40.552 real 0m1.207s 00:06:40.552 user 0m1.139s 00:06:40.552 sys 0m0.065s 00:06:40.552 03:57:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:40.552 03:57:54 -- common/autotest_common.sh@10 -- # set +x 00:06:40.552 ************************************ 00:06:40.552 END TEST thread_poller_perf 00:06:40.552 ************************************ 00:06:40.552 03:57:54 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:40.552 00:06:40.552 real 0m2.854s 00:06:40.552 user 0m2.440s 00:06:40.552 sys 0m0.385s 00:06:40.552 03:57:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:40.552 03:57:54 -- common/autotest_common.sh@10 -- # set +x 00:06:40.552 ************************************ 00:06:40.552 END TEST thread 00:06:40.552 ************************************ 00:06:40.552 03:57:54 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:40.552 03:57:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 
00:06:40.552 03:57:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.552 03:57:54 -- common/autotest_common.sh@10 -- # set +x 00:06:40.552 ************************************ 00:06:40.552 START TEST accel 00:06:40.552 ************************************ 00:06:40.552 03:57:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:40.552 * Looking for test storage... 00:06:40.552 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:06:40.552 03:57:54 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:40.552 03:57:54 -- accel/accel.sh@82 -- # get_expected_opcs 00:06:40.552 03:57:54 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:40.552 03:57:54 -- accel/accel.sh@62 -- # spdk_tgt_pid=144745 00:06:40.552 03:57:54 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:40.552 03:57:54 -- accel/accel.sh@63 -- # waitforlisten 144745 00:06:40.552 03:57:54 -- common/autotest_common.sh@817 -- # '[' -z 144745 ']' 00:06:40.552 03:57:54 -- accel/accel.sh@61 -- # build_accel_config 00:06:40.552 03:57:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.552 03:57:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:40.552 03:57:54 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.552 03:57:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:40.552 03:57:54 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.552 03:57:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:40.552 03:57:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.552 03:57:54 -- common/autotest_common.sh@10 -- # set +x 00:06:40.552 03:57:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.552 03:57:54 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.552 03:57:54 -- accel/accel.sh@40 -- # local IFS=, 00:06:40.552 03:57:54 -- accel/accel.sh@41 -- # jq -r . 00:06:40.552 [2024-04-19 03:57:54.970661] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:06:40.552 [2024-04-19 03:57:54.970705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144745 ] 00:06:40.552 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.552 [2024-04-19 03:57:55.020838] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.811 [2024-04-19 03:57:55.096416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.380 03:57:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:41.380 03:57:55 -- common/autotest_common.sh@850 -- # return 0 00:06:41.380 03:57:55 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:41.380 03:57:55 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:41.380 03:57:55 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:41.380 03:57:55 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:41.380 03:57:55 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:41.380 03:57:55 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:41.380 03:57:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:41.380 03:57:55 -- common/autotest_common.sh@10 -- # set +x 00:06:41.380 03:57:55 -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:41.380 03:57:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:41.380 03:57:55 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # IFS== 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # read -r opc module 00:06:41.380 03:57:55 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.380 03:57:55 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # IFS== 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # read -r opc module 00:06:41.380 03:57:55 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.380 03:57:55 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # IFS== 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # read -r opc module 00:06:41.380 03:57:55 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.380 03:57:55 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # IFS== 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # read -r opc module 00:06:41.380 03:57:55 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.380 03:57:55 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # IFS== 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # read -r opc module 00:06:41.380 03:57:55 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.380 03:57:55 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # IFS== 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # read -r opc module 00:06:41.380 03:57:55 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.380 03:57:55 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # IFS== 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # read -r opc 
module 00:06:41.380 03:57:55 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.380 03:57:55 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # IFS== 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # read -r opc module 00:06:41.380 03:57:55 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.380 03:57:55 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # IFS== 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # read -r opc module 00:06:41.380 03:57:55 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.380 03:57:55 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # IFS== 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # read -r opc module 00:06:41.380 03:57:55 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.380 03:57:55 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # IFS== 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # read -r opc module 00:06:41.380 03:57:55 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.380 03:57:55 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # IFS== 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # read -r opc module 00:06:41.380 03:57:55 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.380 03:57:55 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # IFS== 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # read -r opc module 00:06:41.380 03:57:55 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.380 03:57:55 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # IFS== 00:06:41.380 03:57:55 -- accel/accel.sh@72 -- # read -r opc module 00:06:41.380 03:57:55 -- 
accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.380 03:57:55 -- accel/accel.sh@75 -- # killprocess 144745 00:06:41.380 03:57:55 -- common/autotest_common.sh@936 -- # '[' -z 144745 ']' 00:06:41.380 03:57:55 -- common/autotest_common.sh@940 -- # kill -0 144745 00:06:41.380 03:57:55 -- common/autotest_common.sh@941 -- # uname 00:06:41.380 03:57:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:41.380 03:57:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 144745 00:06:41.380 03:57:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:41.380 03:57:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:41.380 03:57:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 144745' 00:06:41.380 killing process with pid 144745 00:06:41.381 03:57:55 -- common/autotest_common.sh@955 -- # kill 144745 00:06:41.381 03:57:55 -- common/autotest_common.sh@960 -- # wait 144745 00:06:41.640 03:57:56 -- accel/accel.sh@76 -- # trap - ERR 00:06:41.640 03:57:56 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:41.640 03:57:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:41.640 03:57:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.640 03:57:56 -- common/autotest_common.sh@10 -- # set +x 00:06:41.899 03:57:56 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:06:41.899 03:57:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:41.899 03:57:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.899 03:57:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.899 03:57:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.899 03:57:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.899 03:57:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.899 03:57:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.899 03:57:56 -- accel/accel.sh@40 -- # local IFS=, 00:06:41.899 
03:57:56 -- accel/accel.sh@41 -- # jq -r . 00:06:41.899 03:57:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:41.899 03:57:56 -- common/autotest_common.sh@10 -- # set +x 00:06:41.899 03:57:56 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:41.899 03:57:56 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:41.899 03:57:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.899 03:57:56 -- common/autotest_common.sh@10 -- # set +x 00:06:42.158 ************************************ 00:06:42.158 START TEST accel_missing_filename 00:06:42.158 ************************************ 00:06:42.158 03:57:56 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:06:42.158 03:57:56 -- common/autotest_common.sh@638 -- # local es=0 00:06:42.158 03:57:56 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:42.158 03:57:56 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:42.158 03:57:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:42.158 03:57:56 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:42.158 03:57:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:42.158 03:57:56 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:06:42.158 03:57:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:42.158 03:57:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.159 03:57:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.159 03:57:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.159 03:57:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.159 03:57:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.159 03:57:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.159 03:57:56 -- accel/accel.sh@40 -- # local IFS=, 00:06:42.159 03:57:56 -- accel/accel.sh@41 -- # jq -r . 
00:06:42.159 [2024-04-19 03:57:56.520751] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:06:42.159 [2024-04-19 03:57:56.520822] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145056 ] 00:06:42.159 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.159 [2024-04-19 03:57:56.578134] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.159 [2024-04-19 03:57:56.653962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.418 [2024-04-19 03:57:56.694699] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:42.418 [2024-04-19 03:57:56.754299] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:42.418 A filename is required. 00:06:42.418 03:57:56 -- common/autotest_common.sh@641 -- # es=234 00:06:42.418 03:57:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:42.418 03:57:56 -- common/autotest_common.sh@650 -- # es=106 00:06:42.418 03:57:56 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:42.418 03:57:56 -- common/autotest_common.sh@658 -- # es=1 00:06:42.418 03:57:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:42.418 00:06:42.418 real 0m0.347s 00:06:42.418 user 0m0.259s 00:06:42.418 sys 0m0.127s 00:06:42.418 03:57:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:42.418 03:57:56 -- common/autotest_common.sh@10 -- # set +x 00:06:42.418 ************************************ 00:06:42.418 END TEST accel_missing_filename 00:06:42.418 ************************************ 00:06:42.418 03:57:56 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:42.418 03:57:56 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:42.418 03:57:56 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.418 03:57:56 -- common/autotest_common.sh@10 -- # set +x 00:06:42.678 ************************************ 00:06:42.678 START TEST accel_compress_verify 00:06:42.678 ************************************ 00:06:42.678 03:57:56 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:42.678 03:57:56 -- common/autotest_common.sh@638 -- # local es=0 00:06:42.678 03:57:56 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:42.678 03:57:56 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:42.678 03:57:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:42.678 03:57:56 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:42.678 03:57:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:42.678 03:57:56 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:42.678 03:57:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:42.678 03:57:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.678 03:57:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.678 03:57:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.678 03:57:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.678 03:57:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.678 03:57:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.678 03:57:56 -- accel/accel.sh@40 -- # local IFS=, 00:06:42.678 03:57:56 -- accel/accel.sh@41 -- # jq -r . 00:06:42.678 [2024-04-19 03:57:57.002507] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:06:42.678 [2024-04-19 03:57:57.002571] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145093 ] 00:06:42.678 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.678 [2024-04-19 03:57:57.056858] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.678 [2024-04-19 03:57:57.123573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.678 [2024-04-19 03:57:57.163734] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:42.937 [2024-04-19 03:57:57.223107] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:42.937 00:06:42.937 Compression does not support the verify option, aborting. 00:06:42.937 03:57:57 -- common/autotest_common.sh@641 -- # es=161 00:06:42.937 03:57:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:42.937 03:57:57 -- common/autotest_common.sh@650 -- # es=33 00:06:42.937 03:57:57 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:42.937 03:57:57 -- common/autotest_common.sh@658 -- # es=1 00:06:42.937 03:57:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:42.937 00:06:42.937 real 0m0.330s 00:06:42.937 user 0m0.252s 00:06:42.937 sys 0m0.116s 00:06:42.937 03:57:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:42.937 03:57:57 -- common/autotest_common.sh@10 -- # set +x 00:06:42.937 ************************************ 00:06:42.937 END TEST accel_compress_verify 00:06:42.937 ************************************ 00:06:42.937 03:57:57 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:42.937 03:57:57 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:42.937 03:57:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.937 03:57:57 -- common/autotest_common.sh@10 -- # set +x 00:06:43.196 
************************************ 00:06:43.196 START TEST accel_wrong_workload 00:06:43.196 ************************************ 00:06:43.196 03:57:57 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:06:43.196 03:57:57 -- common/autotest_common.sh@638 -- # local es=0 00:06:43.196 03:57:57 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:43.196 03:57:57 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:43.196 03:57:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:43.196 03:57:57 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:43.196 03:57:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:43.196 03:57:57 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:06:43.196 03:57:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:43.196 03:57:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.196 03:57:57 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.196 03:57:57 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.196 03:57:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.196 03:57:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.196 03:57:57 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.196 03:57:57 -- accel/accel.sh@40 -- # local IFS=, 00:06:43.196 03:57:57 -- accel/accel.sh@41 -- # jq -r . 00:06:43.196 Unsupported workload type: foobar 00:06:43.196 [2024-04-19 03:57:57.497581] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:43.196 accel_perf options: 00:06:43.196 [-h help message] 00:06:43.196 [-q queue depth per core] 00:06:43.196 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:43.196 [-T number of threads per core 00:06:43.196 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:06:43.196 [-t time in seconds] 00:06:43.196 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:43.196 [ dif_verify, , dif_generate, dif_generate_copy 00:06:43.196 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:43.196 [-l for compress/decompress workloads, name of uncompressed input file 00:06:43.196 [-S for crc32c workload, use this seed value (default 0) 00:06:43.196 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:43.196 [-f for fill workload, use this BYTE value (default 255) 00:06:43.196 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:43.196 [-y verify result if this switch is on] 00:06:43.196 [-a tasks to allocate per core (default: same value as -q)] 00:06:43.196 Can be used to spread operations across a wider range of memory. 00:06:43.196 03:57:57 -- common/autotest_common.sh@641 -- # es=1 00:06:43.196 03:57:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:43.196 03:57:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:43.196 03:57:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:43.196 00:06:43.196 real 0m0.031s 00:06:43.196 user 0m0.016s 00:06:43.196 sys 0m0.014s 00:06:43.196 03:57:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:43.196 03:57:57 -- common/autotest_common.sh@10 -- # set +x 00:06:43.196 ************************************ 00:06:43.196 END TEST accel_wrong_workload 00:06:43.196 ************************************ 00:06:43.196 Error: writing output failed: Broken pipe 00:06:43.196 03:57:57 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:43.196 03:57:57 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:43.196 03:57:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:06:43.196 03:57:57 -- common/autotest_common.sh@10 -- # set +x 00:06:43.196 ************************************ 00:06:43.196 START TEST accel_negative_buffers 00:06:43.196 ************************************ 00:06:43.196 03:57:57 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:43.196 03:57:57 -- common/autotest_common.sh@638 -- # local es=0 00:06:43.197 03:57:57 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:43.197 03:57:57 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:43.197 03:57:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:43.197 03:57:57 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:43.197 03:57:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:43.197 03:57:57 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:06:43.197 03:57:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:43.197 03:57:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.197 03:57:57 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.197 03:57:57 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.197 03:57:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.197 03:57:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.197 03:57:57 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.197 03:57:57 -- accel/accel.sh@40 -- # local IFS=, 00:06:43.197 03:57:57 -- accel/accel.sh@41 -- # jq -r . 00:06:43.197 -x option must be non-negative. 
00:06:43.197 [2024-04-19 03:57:57.689592] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:43.197 accel_perf options: 00:06:43.197 [-h help message] 00:06:43.197 [-q queue depth per core] 00:06:43.197 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:43.197 [-T number of threads per core 00:06:43.197 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:43.197 [-t time in seconds] 00:06:43.197 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:43.197 [ dif_verify, , dif_generate, dif_generate_copy 00:06:43.197 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:43.197 [-l for compress/decompress workloads, name of uncompressed input file 00:06:43.197 [-S for crc32c workload, use this seed value (default 0) 00:06:43.197 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:43.197 [-f for fill workload, use this BYTE value (default 255) 00:06:43.197 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:43.197 [-y verify result if this switch is on] 00:06:43.197 [-a tasks to allocate per core (default: same value as -q)] 00:06:43.197 Can be used to spread operations across a wider range of memory. 
00:06:43.197 03:57:57 -- common/autotest_common.sh@641 -- # es=1 00:06:43.197 03:57:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:43.197 03:57:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:43.197 03:57:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:43.197 00:06:43.197 real 0m0.031s 00:06:43.197 user 0m0.021s 00:06:43.197 sys 0m0.010s 00:06:43.197 03:57:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:43.197 03:57:57 -- common/autotest_common.sh@10 -- # set +x 00:06:43.197 ************************************ 00:06:43.197 END TEST accel_negative_buffers 00:06:43.197 ************************************ 00:06:43.197 Error: writing output failed: Broken pipe 00:06:43.197 03:57:57 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:43.197 03:57:57 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:43.197 03:57:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.197 03:57:57 -- common/autotest_common.sh@10 -- # set +x 00:06:43.455 ************************************ 00:06:43.455 START TEST accel_crc32c 00:06:43.455 ************************************ 00:06:43.455 03:57:57 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:43.455 03:57:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:43.455 03:57:57 -- accel/accel.sh@17 -- # local accel_module 00:06:43.455 03:57:57 -- accel/accel.sh@19 -- # IFS=: 00:06:43.455 03:57:57 -- accel/accel.sh@19 -- # read -r var val 00:06:43.455 03:57:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:43.455 03:57:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:43.455 03:57:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.455 03:57:57 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.455 03:57:57 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.455 03:57:57 -- 
accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.455 03:57:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.455 03:57:57 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.455 03:57:57 -- accel/accel.sh@40 -- # local IFS=, 00:06:43.455 03:57:57 -- accel/accel.sh@41 -- # jq -r . 00:06:43.455 [2024-04-19 03:57:57.863739] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:06:43.455 [2024-04-19 03:57:57.863798] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145428 ] 00:06:43.455 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.455 [2024-04-19 03:57:57.917601] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.455 [2024-04-19 03:57:57.983564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.715 03:57:58 -- accel/accel.sh@20 -- # val= 00:06:43.715 03:57:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # IFS=: 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # read -r var val 00:06:43.715 03:57:58 -- accel/accel.sh@20 -- # val= 00:06:43.715 03:57:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # IFS=: 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # read -r var val 00:06:43.715 03:57:58 -- accel/accel.sh@20 -- # val=0x1 00:06:43.715 03:57:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # IFS=: 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # read -r var val 00:06:43.715 03:57:58 -- accel/accel.sh@20 -- # val= 00:06:43.715 03:57:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # IFS=: 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # read -r var val 00:06:43.715 03:57:58 -- accel/accel.sh@20 -- # val= 00:06:43.715 03:57:58 -- accel/accel.sh@21 -- # case "$var" in 
00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # IFS=: 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # read -r var val 00:06:43.715 03:57:58 -- accel/accel.sh@20 -- # val=crc32c 00:06:43.715 03:57:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.715 03:57:58 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # IFS=: 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # read -r var val 00:06:43.715 03:57:58 -- accel/accel.sh@20 -- # val=32 00:06:43.715 03:57:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # IFS=: 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # read -r var val 00:06:43.715 03:57:58 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.715 03:57:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # IFS=: 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # read -r var val 00:06:43.715 03:57:58 -- accel/accel.sh@20 -- # val= 00:06:43.715 03:57:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # IFS=: 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # read -r var val 00:06:43.715 03:57:58 -- accel/accel.sh@20 -- # val=software 00:06:43.715 03:57:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.715 03:57:58 -- accel/accel.sh@22 -- # accel_module=software 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # IFS=: 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # read -r var val 00:06:43.715 03:57:58 -- accel/accel.sh@20 -- # val=32 00:06:43.715 03:57:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # IFS=: 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # read -r var val 00:06:43.715 03:57:58 -- accel/accel.sh@20 -- # val=32 00:06:43.715 03:57:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # IFS=: 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # read -r var val 00:06:43.715 03:57:58 -- accel/accel.sh@20 -- # val=1 00:06:43.715 03:57:58 
-- accel/accel.sh@21 -- # case "$var" in 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # IFS=: 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # read -r var val 00:06:43.715 03:57:58 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.715 03:57:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # IFS=: 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # read -r var val 00:06:43.715 03:57:58 -- accel/accel.sh@20 -- # val=Yes 00:06:43.715 03:57:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # IFS=: 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # read -r var val 00:06:43.715 03:57:58 -- accel/accel.sh@20 -- # val= 00:06:43.715 03:57:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # IFS=: 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # read -r var val 00:06:43.715 03:57:58 -- accel/accel.sh@20 -- # val= 00:06:43.715 03:57:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # IFS=: 00:06:43.715 03:57:58 -- accel/accel.sh@19 -- # read -r var val 00:06:44.651 03:57:59 -- accel/accel.sh@20 -- # val= 00:06:44.651 03:57:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.651 03:57:59 -- accel/accel.sh@19 -- # IFS=: 00:06:44.651 03:57:59 -- accel/accel.sh@19 -- # read -r var val 00:06:44.651 03:57:59 -- accel/accel.sh@20 -- # val= 00:06:44.651 03:57:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.651 03:57:59 -- accel/accel.sh@19 -- # IFS=: 00:06:44.651 03:57:59 -- accel/accel.sh@19 -- # read -r var val 00:06:44.651 03:57:59 -- accel/accel.sh@20 -- # val= 00:06:44.651 03:57:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.651 03:57:59 -- accel/accel.sh@19 -- # IFS=: 00:06:44.651 03:57:59 -- accel/accel.sh@19 -- # read -r var val 00:06:44.651 03:57:59 -- accel/accel.sh@20 -- # val= 00:06:44.651 03:57:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.651 03:57:59 -- accel/accel.sh@19 -- # IFS=: 00:06:44.651 
03:57:59 -- accel/accel.sh@19 -- # read -r var val 00:06:44.651 03:57:59 -- accel/accel.sh@20 -- # val= 00:06:44.651 03:57:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.651 03:57:59 -- accel/accel.sh@19 -- # IFS=: 00:06:44.651 03:57:59 -- accel/accel.sh@19 -- # read -r var val 00:06:44.651 03:57:59 -- accel/accel.sh@20 -- # val= 00:06:44.651 03:57:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.651 03:57:59 -- accel/accel.sh@19 -- # IFS=: 00:06:44.651 03:57:59 -- accel/accel.sh@19 -- # read -r var val 00:06:44.651 03:57:59 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.651 03:57:59 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:44.651 03:57:59 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.651 00:06:44.651 real 0m1.331s 00:06:44.651 user 0m1.224s 00:06:44.651 sys 0m0.109s 00:06:44.651 03:57:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:44.651 03:57:59 -- common/autotest_common.sh@10 -- # set +x 00:06:44.651 ************************************ 00:06:44.651 END TEST accel_crc32c 00:06:44.651 ************************************ 00:06:44.911 03:57:59 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:44.911 03:57:59 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:44.911 03:57:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.911 03:57:59 -- common/autotest_common.sh@10 -- # set +x 00:06:44.911 ************************************ 00:06:44.911 START TEST accel_crc32c_C2 00:06:44.911 ************************************ 00:06:44.911 03:57:59 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:44.911 03:57:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.911 03:57:59 -- accel/accel.sh@17 -- # local accel_module 00:06:44.911 03:57:59 -- accel/accel.sh@19 -- # IFS=: 00:06:44.911 03:57:59 -- accel/accel.sh@19 -- # read -r var val 00:06:44.911 03:57:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 
00:06:44.911 03:57:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:44.911 03:57:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.911 03:57:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.911 03:57:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.911 03:57:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.911 03:57:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.911 03:57:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.911 03:57:59 -- accel/accel.sh@40 -- # local IFS=, 00:06:44.911 03:57:59 -- accel/accel.sh@41 -- # jq -r . 00:06:44.911 [2024-04-19 03:57:59.347288] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:06:44.911 [2024-04-19 03:57:59.347351] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145714 ] 00:06:44.911 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.911 [2024-04-19 03:57:59.402265] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.171 [2024-04-19 03:57:59.476481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.171 03:57:59 -- accel/accel.sh@20 -- # val= 00:06:45.171 03:57:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # IFS=: 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # read -r var val 00:06:45.171 03:57:59 -- accel/accel.sh@20 -- # val= 00:06:45.171 03:57:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # IFS=: 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # read -r var val 00:06:45.171 03:57:59 -- accel/accel.sh@20 -- # val=0x1 00:06:45.171 03:57:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # IFS=: 00:06:45.171 03:57:59 -- 
accel/accel.sh@19 -- # read -r var val 00:06:45.171 03:57:59 -- accel/accel.sh@20 -- # val= 00:06:45.171 03:57:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # IFS=: 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # read -r var val 00:06:45.171 03:57:59 -- accel/accel.sh@20 -- # val= 00:06:45.171 03:57:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # IFS=: 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # read -r var val 00:06:45.171 03:57:59 -- accel/accel.sh@20 -- # val=crc32c 00:06:45.171 03:57:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.171 03:57:59 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # IFS=: 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # read -r var val 00:06:45.171 03:57:59 -- accel/accel.sh@20 -- # val=0 00:06:45.171 03:57:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # IFS=: 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # read -r var val 00:06:45.171 03:57:59 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.171 03:57:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # IFS=: 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # read -r var val 00:06:45.171 03:57:59 -- accel/accel.sh@20 -- # val= 00:06:45.171 03:57:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # IFS=: 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # read -r var val 00:06:45.171 03:57:59 -- accel/accel.sh@20 -- # val=software 00:06:45.171 03:57:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.171 03:57:59 -- accel/accel.sh@22 -- # accel_module=software 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # IFS=: 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # read -r var val 00:06:45.171 03:57:59 -- accel/accel.sh@20 -- # val=32 00:06:45.171 03:57:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.171 03:57:59 -- accel/accel.sh@19 
-- # IFS=: 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # read -r var val 00:06:45.171 03:57:59 -- accel/accel.sh@20 -- # val=32 00:06:45.171 03:57:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # IFS=: 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # read -r var val 00:06:45.171 03:57:59 -- accel/accel.sh@20 -- # val=1 00:06:45.171 03:57:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # IFS=: 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # read -r var val 00:06:45.171 03:57:59 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.171 03:57:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # IFS=: 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # read -r var val 00:06:45.171 03:57:59 -- accel/accel.sh@20 -- # val=Yes 00:06:45.171 03:57:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # IFS=: 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # read -r var val 00:06:45.171 03:57:59 -- accel/accel.sh@20 -- # val= 00:06:45.171 03:57:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # IFS=: 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # read -r var val 00:06:45.171 03:57:59 -- accel/accel.sh@20 -- # val= 00:06:45.171 03:57:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # IFS=: 00:06:45.171 03:57:59 -- accel/accel.sh@19 -- # read -r var val 00:06:46.552 03:58:00 -- accel/accel.sh@20 -- # val= 00:06:46.552 03:58:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.552 03:58:00 -- accel/accel.sh@19 -- # IFS=: 00:06:46.552 03:58:00 -- accel/accel.sh@19 -- # read -r var val 00:06:46.552 03:58:00 -- accel/accel.sh@20 -- # val= 00:06:46.552 03:58:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.552 03:58:00 -- accel/accel.sh@19 -- # IFS=: 00:06:46.552 03:58:00 -- accel/accel.sh@19 -- # read -r var val 00:06:46.552 03:58:00 -- 
accel/accel.sh@20 -- # val= 00:06:46.552 03:58:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.552 03:58:00 -- accel/accel.sh@19 -- # IFS=: 00:06:46.552 03:58:00 -- accel/accel.sh@19 -- # read -r var val 00:06:46.552 03:58:00 -- accel/accel.sh@20 -- # val= 00:06:46.552 03:58:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.552 03:58:00 -- accel/accel.sh@19 -- # IFS=: 00:06:46.552 03:58:00 -- accel/accel.sh@19 -- # read -r var val 00:06:46.552 03:58:00 -- accel/accel.sh@20 -- # val= 00:06:46.552 03:58:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.552 03:58:00 -- accel/accel.sh@19 -- # IFS=: 00:06:46.552 03:58:00 -- accel/accel.sh@19 -- # read -r var val 00:06:46.552 03:58:00 -- accel/accel.sh@20 -- # val= 00:06:46.552 03:58:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.552 03:58:00 -- accel/accel.sh@19 -- # IFS=: 00:06:46.552 03:58:00 -- accel/accel.sh@19 -- # read -r var val 00:06:46.552 03:58:00 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.552 03:58:00 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:46.552 03:58:00 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.552 00:06:46.552 real 0m1.345s 00:06:46.552 user 0m1.230s 00:06:46.552 sys 0m0.117s 00:06:46.552 03:58:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:46.552 03:58:00 -- common/autotest_common.sh@10 -- # set +x 00:06:46.552 ************************************ 00:06:46.552 END TEST accel_crc32c_C2 00:06:46.552 ************************************ 00:06:46.552 03:58:00 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:46.552 03:58:00 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:46.552 03:58:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.552 03:58:00 -- common/autotest_common.sh@10 -- # set +x 00:06:46.552 ************************************ 00:06:46.552 START TEST accel_copy 00:06:46.552 ************************************ 00:06:46.552 03:58:00 -- 
common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:06:46.552 03:58:00 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.552 03:58:00 -- accel/accel.sh@17 -- # local accel_module 00:06:46.552 03:58:00 -- accel/accel.sh@19 -- # IFS=: 00:06:46.552 03:58:00 -- accel/accel.sh@19 -- # read -r var val 00:06:46.552 03:58:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:46.552 03:58:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:46.552 03:58:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.552 03:58:00 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.552 03:58:00 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.552 03:58:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.552 03:58:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.552 03:58:00 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.552 03:58:00 -- accel/accel.sh@40 -- # local IFS=, 00:06:46.552 03:58:00 -- accel/accel.sh@41 -- # jq -r . 00:06:46.552 [2024-04-19 03:58:00.844673] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:06:46.552 [2024-04-19 03:58:00.844725] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146011 ] 00:06:46.552 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.552 [2024-04-19 03:58:00.899146] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.552 [2024-04-19 03:58:00.970466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.552 03:58:01 -- accel/accel.sh@20 -- # val= 00:06:46.552 03:58:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # IFS=: 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # read -r var val 00:06:46.552 03:58:01 -- accel/accel.sh@20 -- # val= 00:06:46.552 03:58:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # IFS=: 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # read -r var val 00:06:46.552 03:58:01 -- accel/accel.sh@20 -- # val=0x1 00:06:46.552 03:58:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # IFS=: 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # read -r var val 00:06:46.552 03:58:01 -- accel/accel.sh@20 -- # val= 00:06:46.552 03:58:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # IFS=: 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # read -r var val 00:06:46.552 03:58:01 -- accel/accel.sh@20 -- # val= 00:06:46.552 03:58:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # IFS=: 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # read -r var val 00:06:46.552 03:58:01 -- accel/accel.sh@20 -- # val=copy 00:06:46.552 03:58:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.552 03:58:01 -- accel/accel.sh@23 -- # accel_opc=copy 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # IFS=: 00:06:46.552 03:58:01 -- 
accel/accel.sh@19 -- # read -r var val 00:06:46.552 03:58:01 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.552 03:58:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # IFS=: 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # read -r var val 00:06:46.552 03:58:01 -- accel/accel.sh@20 -- # val= 00:06:46.552 03:58:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # IFS=: 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # read -r var val 00:06:46.552 03:58:01 -- accel/accel.sh@20 -- # val=software 00:06:46.552 03:58:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.552 03:58:01 -- accel/accel.sh@22 -- # accel_module=software 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # IFS=: 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # read -r var val 00:06:46.552 03:58:01 -- accel/accel.sh@20 -- # val=32 00:06:46.552 03:58:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # IFS=: 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # read -r var val 00:06:46.552 03:58:01 -- accel/accel.sh@20 -- # val=32 00:06:46.552 03:58:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # IFS=: 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # read -r var val 00:06:46.552 03:58:01 -- accel/accel.sh@20 -- # val=1 00:06:46.552 03:58:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # IFS=: 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # read -r var val 00:06:46.552 03:58:01 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.552 03:58:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # IFS=: 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # read -r var val 00:06:46.552 03:58:01 -- accel/accel.sh@20 -- # val=Yes 00:06:46.552 03:58:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # IFS=: 00:06:46.552 03:58:01 -- accel/accel.sh@19 
-- # read -r var val 00:06:46.552 03:58:01 -- accel/accel.sh@20 -- # val= 00:06:46.552 03:58:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # IFS=: 00:06:46.552 03:58:01 -- accel/accel.sh@19 -- # read -r var val 00:06:46.552 03:58:01 -- accel/accel.sh@20 -- # val= 00:06:46.552 03:58:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.553 03:58:01 -- accel/accel.sh@19 -- # IFS=: 00:06:46.553 03:58:01 -- accel/accel.sh@19 -- # read -r var val 00:06:47.931 03:58:02 -- accel/accel.sh@20 -- # val= 00:06:47.931 03:58:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.931 03:58:02 -- accel/accel.sh@19 -- # IFS=: 00:06:47.931 03:58:02 -- accel/accel.sh@19 -- # read -r var val 00:06:47.931 03:58:02 -- accel/accel.sh@20 -- # val= 00:06:47.931 03:58:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.931 03:58:02 -- accel/accel.sh@19 -- # IFS=: 00:06:47.931 03:58:02 -- accel/accel.sh@19 -- # read -r var val 00:06:47.931 03:58:02 -- accel/accel.sh@20 -- # val= 00:06:47.931 03:58:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.931 03:58:02 -- accel/accel.sh@19 -- # IFS=: 00:06:47.931 03:58:02 -- accel/accel.sh@19 -- # read -r var val 00:06:47.931 03:58:02 -- accel/accel.sh@20 -- # val= 00:06:47.931 03:58:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.931 03:58:02 -- accel/accel.sh@19 -- # IFS=: 00:06:47.931 03:58:02 -- accel/accel.sh@19 -- # read -r var val 00:06:47.931 03:58:02 -- accel/accel.sh@20 -- # val= 00:06:47.931 03:58:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.931 03:58:02 -- accel/accel.sh@19 -- # IFS=: 00:06:47.931 03:58:02 -- accel/accel.sh@19 -- # read -r var val 00:06:47.931 03:58:02 -- accel/accel.sh@20 -- # val= 00:06:47.931 03:58:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.931 03:58:02 -- accel/accel.sh@19 -- # IFS=: 00:06:47.931 03:58:02 -- accel/accel.sh@19 -- # read -r var val 00:06:47.931 03:58:02 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.931 03:58:02 -- 
accel/accel.sh@27 -- # [[ -n copy ]] 00:06:47.931 03:58:02 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.931 00:06:47.931 real 0m1.339s 00:06:47.931 user 0m1.233s 00:06:47.931 sys 0m0.109s 00:06:47.931 03:58:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:47.931 03:58:02 -- common/autotest_common.sh@10 -- # set +x 00:06:47.931 ************************************ 00:06:47.931 END TEST accel_copy 00:06:47.931 ************************************ 00:06:47.931 03:58:02 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:47.931 03:58:02 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:47.931 03:58:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.931 03:58:02 -- common/autotest_common.sh@10 -- # set +x 00:06:47.931 ************************************ 00:06:47.931 START TEST accel_fill 00:06:47.931 ************************************ 00:06:47.931 03:58:02 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:47.931 03:58:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.931 03:58:02 -- accel/accel.sh@17 -- # local accel_module 00:06:47.931 03:58:02 -- accel/accel.sh@19 -- # IFS=: 00:06:47.931 03:58:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:47.931 03:58:02 -- accel/accel.sh@19 -- # read -r var val 00:06:47.931 03:58:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:47.931 03:58:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.931 03:58:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.931 03:58:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.931 03:58:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.931 03:58:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.931 03:58:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.931 03:58:02 -- accel/accel.sh@40 
-- # local IFS=, 00:06:47.931 03:58:02 -- accel/accel.sh@41 -- # jq -r . 00:06:47.931 [2024-04-19 03:58:02.342653] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:06:47.931 [2024-04-19 03:58:02.342696] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146298 ] 00:06:47.931 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.931 [2024-04-19 03:58:02.395152] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.191 [2024-04-19 03:58:02.463633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.191 03:58:02 -- accel/accel.sh@20 -- # val= 00:06:48.191 03:58:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # IFS=: 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # read -r var val 00:06:48.191 03:58:02 -- accel/accel.sh@20 -- # val= 00:06:48.191 03:58:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # IFS=: 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # read -r var val 00:06:48.191 03:58:02 -- accel/accel.sh@20 -- # val=0x1 00:06:48.191 03:58:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # IFS=: 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # read -r var val 00:06:48.191 03:58:02 -- accel/accel.sh@20 -- # val= 00:06:48.191 03:58:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # IFS=: 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # read -r var val 00:06:48.191 03:58:02 -- accel/accel.sh@20 -- # val= 00:06:48.191 03:58:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # IFS=: 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # read -r var val 00:06:48.191 03:58:02 -- accel/accel.sh@20 -- # val=fill 00:06:48.191 03:58:02 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:48.191 03:58:02 -- accel/accel.sh@23 -- # accel_opc=fill 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # IFS=: 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # read -r var val 00:06:48.191 03:58:02 -- accel/accel.sh@20 -- # val=0x80 00:06:48.191 03:58:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # IFS=: 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # read -r var val 00:06:48.191 03:58:02 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.191 03:58:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # IFS=: 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # read -r var val 00:06:48.191 03:58:02 -- accel/accel.sh@20 -- # val= 00:06:48.191 03:58:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # IFS=: 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # read -r var val 00:06:48.191 03:58:02 -- accel/accel.sh@20 -- # val=software 00:06:48.191 03:58:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.191 03:58:02 -- accel/accel.sh@22 -- # accel_module=software 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # IFS=: 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # read -r var val 00:06:48.191 03:58:02 -- accel/accel.sh@20 -- # val=64 00:06:48.191 03:58:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # IFS=: 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # read -r var val 00:06:48.191 03:58:02 -- accel/accel.sh@20 -- # val=64 00:06:48.191 03:58:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # IFS=: 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # read -r var val 00:06:48.191 03:58:02 -- accel/accel.sh@20 -- # val=1 00:06:48.191 03:58:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # IFS=: 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # read -r var val 00:06:48.191 03:58:02 -- accel/accel.sh@20 
-- # val='1 seconds' 00:06:48.191 03:58:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # IFS=: 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # read -r var val 00:06:48.191 03:58:02 -- accel/accel.sh@20 -- # val=Yes 00:06:48.191 03:58:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # IFS=: 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # read -r var val 00:06:48.191 03:58:02 -- accel/accel.sh@20 -- # val= 00:06:48.191 03:58:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # IFS=: 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # read -r var val 00:06:48.191 03:58:02 -- accel/accel.sh@20 -- # val= 00:06:48.191 03:58:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # IFS=: 00:06:48.191 03:58:02 -- accel/accel.sh@19 -- # read -r var val 00:06:49.129 03:58:03 -- accel/accel.sh@20 -- # val= 00:06:49.129 03:58:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.129 03:58:03 -- accel/accel.sh@19 -- # IFS=: 00:06:49.129 03:58:03 -- accel/accel.sh@19 -- # read -r var val 00:06:49.129 03:58:03 -- accel/accel.sh@20 -- # val= 00:06:49.129 03:58:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.129 03:58:03 -- accel/accel.sh@19 -- # IFS=: 00:06:49.129 03:58:03 -- accel/accel.sh@19 -- # read -r var val 00:06:49.129 03:58:03 -- accel/accel.sh@20 -- # val= 00:06:49.129 03:58:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.129 03:58:03 -- accel/accel.sh@19 -- # IFS=: 00:06:49.129 03:58:03 -- accel/accel.sh@19 -- # read -r var val 00:06:49.129 03:58:03 -- accel/accel.sh@20 -- # val= 00:06:49.129 03:58:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.129 03:58:03 -- accel/accel.sh@19 -- # IFS=: 00:06:49.129 03:58:03 -- accel/accel.sh@19 -- # read -r var val 00:06:49.129 03:58:03 -- accel/accel.sh@20 -- # val= 00:06:49.129 03:58:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.129 03:58:03 -- 
accel/accel.sh@19 -- # IFS=: 00:06:49.129 03:58:03 -- accel/accel.sh@19 -- # read -r var val 00:06:49.129 03:58:03 -- accel/accel.sh@20 -- # val= 00:06:49.129 03:58:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.129 03:58:03 -- accel/accel.sh@19 -- # IFS=: 00:06:49.129 03:58:03 -- accel/accel.sh@19 -- # read -r var val 00:06:49.129 03:58:03 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.129 03:58:03 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:49.129 03:58:03 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.129 00:06:49.129 real 0m1.333s 00:06:49.129 user 0m0.009s 00:06:49.129 sys 0m0.001s 00:06:49.129 03:58:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:49.129 03:58:03 -- common/autotest_common.sh@10 -- # set +x 00:06:49.129 ************************************ 00:06:49.129 END TEST accel_fill 00:06:49.129 ************************************ 00:06:49.388 03:58:03 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:49.388 03:58:03 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:49.388 03:58:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.388 03:58:03 -- common/autotest_common.sh@10 -- # set +x 00:06:49.388 ************************************ 00:06:49.388 START TEST accel_copy_crc32c 00:06:49.388 ************************************ 00:06:49.388 03:58:03 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:06:49.388 03:58:03 -- accel/accel.sh@16 -- # local accel_opc 00:06:49.388 03:58:03 -- accel/accel.sh@17 -- # local accel_module 00:06:49.388 03:58:03 -- accel/accel.sh@19 -- # IFS=: 00:06:49.388 03:58:03 -- accel/accel.sh@19 -- # read -r var val 00:06:49.388 03:58:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:49.388 03:58:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:49.388 03:58:03 -- 
accel/accel.sh@12 -- # build_accel_config 00:06:49.388 03:58:03 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.388 03:58:03 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.388 03:58:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.388 03:58:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.388 03:58:03 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.388 03:58:03 -- accel/accel.sh@40 -- # local IFS=, 00:06:49.388 03:58:03 -- accel/accel.sh@41 -- # jq -r . 00:06:49.388 [2024-04-19 03:58:03.820829] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:06:49.388 [2024-04-19 03:58:03.820893] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146592 ] 00:06:49.388 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.388 [2024-04-19 03:58:03.874260] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.648 [2024-04-19 03:58:03.942932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.648 03:58:03 -- accel/accel.sh@20 -- # val= 00:06:49.648 03:58:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # IFS=: 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # read -r var val 00:06:49.648 03:58:03 -- accel/accel.sh@20 -- # val= 00:06:49.648 03:58:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # IFS=: 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # read -r var val 00:06:49.648 03:58:03 -- accel/accel.sh@20 -- # val=0x1 00:06:49.648 03:58:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # IFS=: 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # read -r var val 00:06:49.648 03:58:03 -- accel/accel.sh@20 -- # val= 00:06:49.648 03:58:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.648 03:58:03 -- 
accel/accel.sh@19 -- # IFS=: 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # read -r var val 00:06:49.648 03:58:03 -- accel/accel.sh@20 -- # val= 00:06:49.648 03:58:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # IFS=: 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # read -r var val 00:06:49.648 03:58:03 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:49.648 03:58:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.648 03:58:03 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # IFS=: 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # read -r var val 00:06:49.648 03:58:03 -- accel/accel.sh@20 -- # val=0 00:06:49.648 03:58:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # IFS=: 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # read -r var val 00:06:49.648 03:58:03 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.648 03:58:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # IFS=: 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # read -r var val 00:06:49.648 03:58:03 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.648 03:58:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # IFS=: 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # read -r var val 00:06:49.648 03:58:03 -- accel/accel.sh@20 -- # val= 00:06:49.648 03:58:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # IFS=: 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # read -r var val 00:06:49.648 03:58:03 -- accel/accel.sh@20 -- # val=software 00:06:49.648 03:58:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.648 03:58:03 -- accel/accel.sh@22 -- # accel_module=software 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # IFS=: 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # read -r var val 00:06:49.648 03:58:03 -- accel/accel.sh@20 -- # val=32 00:06:49.648 03:58:03 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # IFS=: 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # read -r var val 00:06:49.648 03:58:03 -- accel/accel.sh@20 -- # val=32 00:06:49.648 03:58:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # IFS=: 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # read -r var val 00:06:49.648 03:58:03 -- accel/accel.sh@20 -- # val=1 00:06:49.648 03:58:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # IFS=: 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # read -r var val 00:06:49.648 03:58:03 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.648 03:58:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # IFS=: 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # read -r var val 00:06:49.648 03:58:03 -- accel/accel.sh@20 -- # val=Yes 00:06:49.648 03:58:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # IFS=: 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # read -r var val 00:06:49.648 03:58:03 -- accel/accel.sh@20 -- # val= 00:06:49.648 03:58:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # IFS=: 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # read -r var val 00:06:49.648 03:58:03 -- accel/accel.sh@20 -- # val= 00:06:49.648 03:58:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # IFS=: 00:06:49.648 03:58:03 -- accel/accel.sh@19 -- # read -r var val 00:06:51.028 03:58:05 -- accel/accel.sh@20 -- # val= 00:06:51.028 03:58:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # IFS=: 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # read -r var val 00:06:51.028 03:58:05 -- accel/accel.sh@20 -- # val= 00:06:51.028 03:58:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # IFS=: 00:06:51.028 
03:58:05 -- accel/accel.sh@19 -- # read -r var val 00:06:51.028 03:58:05 -- accel/accel.sh@20 -- # val= 00:06:51.028 03:58:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # IFS=: 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # read -r var val 00:06:51.028 03:58:05 -- accel/accel.sh@20 -- # val= 00:06:51.028 03:58:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # IFS=: 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # read -r var val 00:06:51.028 03:58:05 -- accel/accel.sh@20 -- # val= 00:06:51.028 03:58:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # IFS=: 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # read -r var val 00:06:51.028 03:58:05 -- accel/accel.sh@20 -- # val= 00:06:51.028 03:58:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # IFS=: 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # read -r var val 00:06:51.028 03:58:05 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.028 03:58:05 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:51.028 03:58:05 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.028 00:06:51.028 real 0m1.337s 00:06:51.028 user 0m1.230s 00:06:51.028 sys 0m0.110s 00:06:51.028 03:58:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:51.028 03:58:05 -- common/autotest_common.sh@10 -- # set +x 00:06:51.028 ************************************ 00:06:51.028 END TEST accel_copy_crc32c 00:06:51.028 ************************************ 00:06:51.028 03:58:05 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:51.028 03:58:05 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:51.028 03:58:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.028 03:58:05 -- common/autotest_common.sh@10 -- # set +x 00:06:51.028 ************************************ 00:06:51.028 START TEST 
accel_copy_crc32c_C2 00:06:51.028 ************************************ 00:06:51.028 03:58:05 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:51.028 03:58:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:51.028 03:58:05 -- accel/accel.sh@17 -- # local accel_module 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # IFS=: 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # read -r var val 00:06:51.028 03:58:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:51.028 03:58:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:51.028 03:58:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.028 03:58:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.028 03:58:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.028 03:58:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.028 03:58:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.028 03:58:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.028 03:58:05 -- accel/accel.sh@40 -- # local IFS=, 00:06:51.028 03:58:05 -- accel/accel.sh@41 -- # jq -r . 00:06:51.028 [2024-04-19 03:58:05.283529] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:06:51.028 [2024-04-19 03:58:05.283568] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146875 ] 00:06:51.028 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.028 [2024-04-19 03:58:05.332792] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.028 [2024-04-19 03:58:05.399082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.028 03:58:05 -- accel/accel.sh@20 -- # val= 00:06:51.028 03:58:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # IFS=: 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # read -r var val 00:06:51.028 03:58:05 -- accel/accel.sh@20 -- # val= 00:06:51.028 03:58:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # IFS=: 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # read -r var val 00:06:51.028 03:58:05 -- accel/accel.sh@20 -- # val=0x1 00:06:51.028 03:58:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # IFS=: 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # read -r var val 00:06:51.028 03:58:05 -- accel/accel.sh@20 -- # val= 00:06:51.028 03:58:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # IFS=: 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # read -r var val 00:06:51.028 03:58:05 -- accel/accel.sh@20 -- # val= 00:06:51.028 03:58:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # IFS=: 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # read -r var val 00:06:51.028 03:58:05 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:51.028 03:58:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.028 03:58:05 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # IFS=: 00:06:51.028 03:58:05 -- 
accel/accel.sh@19 -- # read -r var val 00:06:51.028 03:58:05 -- accel/accel.sh@20 -- # val=0 00:06:51.028 03:58:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # IFS=: 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # read -r var val 00:06:51.028 03:58:05 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.028 03:58:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # IFS=: 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # read -r var val 00:06:51.028 03:58:05 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:51.028 03:58:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # IFS=: 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # read -r var val 00:06:51.028 03:58:05 -- accel/accel.sh@20 -- # val= 00:06:51.028 03:58:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # IFS=: 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # read -r var val 00:06:51.028 03:58:05 -- accel/accel.sh@20 -- # val=software 00:06:51.028 03:58:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.028 03:58:05 -- accel/accel.sh@22 -- # accel_module=software 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # IFS=: 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # read -r var val 00:06:51.028 03:58:05 -- accel/accel.sh@20 -- # val=32 00:06:51.028 03:58:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.028 03:58:05 -- accel/accel.sh@19 -- # IFS=: 00:06:51.029 03:58:05 -- accel/accel.sh@19 -- # read -r var val 00:06:51.029 03:58:05 -- accel/accel.sh@20 -- # val=32 00:06:51.029 03:58:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.029 03:58:05 -- accel/accel.sh@19 -- # IFS=: 00:06:51.029 03:58:05 -- accel/accel.sh@19 -- # read -r var val 00:06:51.029 03:58:05 -- accel/accel.sh@20 -- # val=1 00:06:51.029 03:58:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.029 03:58:05 -- accel/accel.sh@19 -- # IFS=: 00:06:51.029 03:58:05 -- accel/accel.sh@19 
-- # read -r var val 00:06:51.029 03:58:05 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.029 03:58:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.029 03:58:05 -- accel/accel.sh@19 -- # IFS=: 00:06:51.029 03:58:05 -- accel/accel.sh@19 -- # read -r var val 00:06:51.029 03:58:05 -- accel/accel.sh@20 -- # val=Yes 00:06:51.029 03:58:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.029 03:58:05 -- accel/accel.sh@19 -- # IFS=: 00:06:51.029 03:58:05 -- accel/accel.sh@19 -- # read -r var val 00:06:51.029 03:58:05 -- accel/accel.sh@20 -- # val= 00:06:51.029 03:58:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.029 03:58:05 -- accel/accel.sh@19 -- # IFS=: 00:06:51.029 03:58:05 -- accel/accel.sh@19 -- # read -r var val 00:06:51.029 03:58:05 -- accel/accel.sh@20 -- # val= 00:06:51.029 03:58:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.029 03:58:05 -- accel/accel.sh@19 -- # IFS=: 00:06:51.029 03:58:05 -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 03:58:06 -- accel/accel.sh@20 -- # val= 00:06:52.409 03:58:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 03:58:06 -- accel/accel.sh@20 -- # val= 00:06:52.409 03:58:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 03:58:06 -- accel/accel.sh@20 -- # val= 00:06:52.409 03:58:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 03:58:06 -- accel/accel.sh@20 -- # val= 00:06:52.409 03:58:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 03:58:06 -- accel/accel.sh@20 -- # val= 00:06:52.409 03:58:06 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 03:58:06 -- accel/accel.sh@20 -- # val= 00:06:52.409 03:58:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 03:58:06 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.409 03:58:06 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:52.409 03:58:06 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.409 00:06:52.409 real 0m1.315s 00:06:52.409 user 0m1.221s 00:06:52.409 sys 0m0.098s 00:06:52.409 03:58:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:52.409 03:58:06 -- common/autotest_common.sh@10 -- # set +x 00:06:52.409 ************************************ 00:06:52.409 END TEST accel_copy_crc32c_C2 00:06:52.409 ************************************ 00:06:52.409 03:58:06 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:52.409 03:58:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:52.409 03:58:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.409 03:58:06 -- common/autotest_common.sh@10 -- # set +x 00:06:52.409 ************************************ 00:06:52.409 START TEST accel_dualcast 00:06:52.409 ************************************ 00:06:52.409 03:58:06 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:06:52.409 03:58:06 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.409 03:58:06 -- accel/accel.sh@17 -- # local accel_module 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 03:58:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:52.409 03:58:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c 
/dev/fd/62 -t 1 -w dualcast -y 00:06:52.409 03:58:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.409 03:58:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.409 03:58:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.409 03:58:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.409 03:58:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.409 03:58:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.409 03:58:06 -- accel/accel.sh@40 -- # local IFS=, 00:06:52.409 03:58:06 -- accel/accel.sh@41 -- # jq -r . 00:06:52.409 [2024-04-19 03:58:06.731434] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:06:52.409 [2024-04-19 03:58:06.731476] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147168 ] 00:06:52.409 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.409 [2024-04-19 03:58:06.780010] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.409 [2024-04-19 03:58:06.845638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.409 03:58:06 -- accel/accel.sh@20 -- # val= 00:06:52.409 03:58:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 03:58:06 -- accel/accel.sh@20 -- # val= 00:06:52.409 03:58:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 03:58:06 -- accel/accel.sh@20 -- # val=0x1 00:06:52.409 03:58:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 03:58:06 -- accel/accel.sh@20 -- # val= 00:06:52.409 03:58:06 -- accel/accel.sh@21 -- # 
case "$var" in 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 03:58:06 -- accel/accel.sh@20 -- # val= 00:06:52.409 03:58:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 03:58:06 -- accel/accel.sh@20 -- # val=dualcast 00:06:52.409 03:58:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 03:58:06 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 03:58:06 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.409 03:58:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 03:58:06 -- accel/accel.sh@20 -- # val= 00:06:52.409 03:58:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 03:58:06 -- accel/accel.sh@20 -- # val=software 00:06:52.409 03:58:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 03:58:06 -- accel/accel.sh@22 -- # accel_module=software 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 03:58:06 -- accel/accel.sh@20 -- # val=32 00:06:52.409 03:58:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 03:58:06 -- accel/accel.sh@20 -- # val=32 00:06:52.409 03:58:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 03:58:06 -- accel/accel.sh@20 -- # val=1 
00:06:52.409 03:58:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 03:58:06 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.409 03:58:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 03:58:06 -- accel/accel.sh@20 -- # val=Yes 00:06:52.409 03:58:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 03:58:06 -- accel/accel.sh@20 -- # val= 00:06:52.409 03:58:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # read -r var val 00:06:52.409 03:58:06 -- accel/accel.sh@20 -- # val= 00:06:52.409 03:58:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # IFS=: 00:06:52.409 03:58:06 -- accel/accel.sh@19 -- # read -r var val 00:06:53.787 03:58:08 -- accel/accel.sh@20 -- # val= 00:06:53.787 03:58:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.787 03:58:08 -- accel/accel.sh@19 -- # IFS=: 00:06:53.787 03:58:08 -- accel/accel.sh@19 -- # read -r var val 00:06:53.787 03:58:08 -- accel/accel.sh@20 -- # val= 00:06:53.787 03:58:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.787 03:58:08 -- accel/accel.sh@19 -- # IFS=: 00:06:53.787 03:58:08 -- accel/accel.sh@19 -- # read -r var val 00:06:53.787 03:58:08 -- accel/accel.sh@20 -- # val= 00:06:53.787 03:58:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.787 03:58:08 -- accel/accel.sh@19 -- # IFS=: 00:06:53.787 03:58:08 -- accel/accel.sh@19 -- # read -r var val 00:06:53.787 03:58:08 -- accel/accel.sh@20 -- # val= 00:06:53.787 03:58:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.787 03:58:08 -- accel/accel.sh@19 -- # 
IFS=: 00:06:53.787 03:58:08 -- accel/accel.sh@19 -- # read -r var val 00:06:53.787 03:58:08 -- accel/accel.sh@20 -- # val= 00:06:53.787 03:58:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.787 03:58:08 -- accel/accel.sh@19 -- # IFS=: 00:06:53.787 03:58:08 -- accel/accel.sh@19 -- # read -r var val 00:06:53.787 03:58:08 -- accel/accel.sh@20 -- # val= 00:06:53.787 03:58:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.787 03:58:08 -- accel/accel.sh@19 -- # IFS=: 00:06:53.787 03:58:08 -- accel/accel.sh@19 -- # read -r var val 00:06:53.787 03:58:08 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.787 03:58:08 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:53.787 03:58:08 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.787 00:06:53.787 real 0m1.314s 00:06:53.787 user 0m1.215s 00:06:53.787 sys 0m0.103s 00:06:53.787 03:58:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:53.787 03:58:08 -- common/autotest_common.sh@10 -- # set +x 00:06:53.787 ************************************ 00:06:53.787 END TEST accel_dualcast 00:06:53.787 ************************************ 00:06:53.787 03:58:08 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:53.787 03:58:08 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:53.787 03:58:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.787 03:58:08 -- common/autotest_common.sh@10 -- # set +x 00:06:53.787 ************************************ 00:06:53.787 START TEST accel_compare 00:06:53.787 ************************************ 00:06:53.787 03:58:08 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:06:53.787 03:58:08 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.787 03:58:08 -- accel/accel.sh@17 -- # local accel_module 00:06:53.787 03:58:08 -- accel/accel.sh@19 -- # IFS=: 00:06:53.787 03:58:08 -- accel/accel.sh@19 -- # read -r var val 00:06:53.787 03:58:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare 
-y 00:06:53.787 03:58:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:53.787 03:58:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.787 03:58:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.787 03:58:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.787 03:58:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.787 03:58:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.787 03:58:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.788 03:58:08 -- accel/accel.sh@40 -- # local IFS=, 00:06:53.788 03:58:08 -- accel/accel.sh@41 -- # jq -r . 00:06:53.788 [2024-04-19 03:58:08.177524] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:06:53.788 [2024-04-19 03:58:08.177564] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147452 ] 00:06:53.788 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.788 [2024-04-19 03:58:08.225747] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.788 [2024-04-19 03:58:08.291497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.047 03:58:08 -- accel/accel.sh@20 -- # val= 00:06:54.047 03:58:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.047 03:58:08 -- accel/accel.sh@19 -- # IFS=: 00:06:54.047 03:58:08 -- accel/accel.sh@19 -- # read -r var val 00:06:54.047 03:58:08 -- accel/accel.sh@20 -- # val= 00:06:54.047 03:58:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.047 03:58:08 -- accel/accel.sh@19 -- # IFS=: 00:06:54.047 03:58:08 -- accel/accel.sh@19 -- # read -r var val 00:06:54.047 03:58:08 -- accel/accel.sh@20 -- # val=0x1 00:06:54.047 03:58:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.047 03:58:08 -- accel/accel.sh@19 -- # IFS=: 00:06:54.047 03:58:08 -- 
accel/accel.sh@19 -- # read -r var val 00:06:54.047 03:58:08 -- accel/accel.sh@20 -- # val= 00:06:54.047 03:58:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.047 03:58:08 -- accel/accel.sh@19 -- # IFS=: 00:06:54.047 03:58:08 -- accel/accel.sh@19 -- # read -r var val 00:06:54.047 03:58:08 -- accel/accel.sh@20 -- # val= 00:06:54.047 03:58:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.047 03:58:08 -- accel/accel.sh@19 -- # IFS=: 00:06:54.047 03:58:08 -- accel/accel.sh@19 -- # read -r var val 00:06:54.047 03:58:08 -- accel/accel.sh@20 -- # val=compare 00:06:54.047 03:58:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.047 03:58:08 -- accel/accel.sh@23 -- # accel_opc=compare 00:06:54.047 03:58:08 -- accel/accel.sh@19 -- # IFS=: 00:06:54.047 03:58:08 -- accel/accel.sh@19 -- # read -r var val 00:06:54.047 03:58:08 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.047 03:58:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.047 03:58:08 -- accel/accel.sh@19 -- # IFS=: 00:06:54.047 03:58:08 -- accel/accel.sh@19 -- # read -r var val 00:06:54.047 03:58:08 -- accel/accel.sh@20 -- # val= 00:06:54.047 03:58:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.047 03:58:08 -- accel/accel.sh@19 -- # IFS=: 00:06:54.047 03:58:08 -- accel/accel.sh@19 -- # read -r var val 00:06:54.047 03:58:08 -- accel/accel.sh@20 -- # val=software 00:06:54.047 03:58:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.047 03:58:08 -- accel/accel.sh@22 -- # accel_module=software 00:06:54.047 03:58:08 -- accel/accel.sh@19 -- # IFS=: 00:06:54.048 03:58:08 -- accel/accel.sh@19 -- # read -r var val 00:06:54.048 03:58:08 -- accel/accel.sh@20 -- # val=32 00:06:54.048 03:58:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.048 03:58:08 -- accel/accel.sh@19 -- # IFS=: 00:06:54.048 03:58:08 -- accel/accel.sh@19 -- # read -r var val 00:06:54.048 03:58:08 -- accel/accel.sh@20 -- # val=32 00:06:54.048 03:58:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.048 03:58:08 -- 
accel/accel.sh@19 -- # IFS=: 00:06:54.048 03:58:08 -- accel/accel.sh@19 -- # read -r var val 00:06:54.048 03:58:08 -- accel/accel.sh@20 -- # val=1 00:06:54.048 03:58:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.048 03:58:08 -- accel/accel.sh@19 -- # IFS=: 00:06:54.048 03:58:08 -- accel/accel.sh@19 -- # read -r var val 00:06:54.048 03:58:08 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.048 03:58:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.048 03:58:08 -- accel/accel.sh@19 -- # IFS=: 00:06:54.048 03:58:08 -- accel/accel.sh@19 -- # read -r var val 00:06:54.048 03:58:08 -- accel/accel.sh@20 -- # val=Yes 00:06:54.048 03:58:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.048 03:58:08 -- accel/accel.sh@19 -- # IFS=: 00:06:54.048 03:58:08 -- accel/accel.sh@19 -- # read -r var val 00:06:54.048 03:58:08 -- accel/accel.sh@20 -- # val= 00:06:54.048 03:58:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.048 03:58:08 -- accel/accel.sh@19 -- # IFS=: 00:06:54.048 03:58:08 -- accel/accel.sh@19 -- # read -r var val 00:06:54.048 03:58:08 -- accel/accel.sh@20 -- # val= 00:06:54.048 03:58:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.048 03:58:08 -- accel/accel.sh@19 -- # IFS=: 00:06:54.048 03:58:08 -- accel/accel.sh@19 -- # read -r var val 00:06:54.986 03:58:09 -- accel/accel.sh@20 -- # val= 00:06:54.986 03:58:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.986 03:58:09 -- accel/accel.sh@19 -- # IFS=: 00:06:54.986 03:58:09 -- accel/accel.sh@19 -- # read -r var val 00:06:54.986 03:58:09 -- accel/accel.sh@20 -- # val= 00:06:54.986 03:58:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.986 03:58:09 -- accel/accel.sh@19 -- # IFS=: 00:06:54.986 03:58:09 -- accel/accel.sh@19 -- # read -r var val 00:06:54.986 03:58:09 -- accel/accel.sh@20 -- # val= 00:06:54.986 03:58:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.986 03:58:09 -- accel/accel.sh@19 -- # IFS=: 00:06:54.986 03:58:09 -- accel/accel.sh@19 -- # read -r var val 00:06:54.986 
03:58:09 -- accel/accel.sh@20 -- # val= 00:06:54.986 03:58:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.986 03:58:09 -- accel/accel.sh@19 -- # IFS=: 00:06:54.986 03:58:09 -- accel/accel.sh@19 -- # read -r var val 00:06:54.986 03:58:09 -- accel/accel.sh@20 -- # val= 00:06:54.986 03:58:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.986 03:58:09 -- accel/accel.sh@19 -- # IFS=: 00:06:54.986 03:58:09 -- accel/accel.sh@19 -- # read -r var val 00:06:54.986 03:58:09 -- accel/accel.sh@20 -- # val= 00:06:54.986 03:58:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.986 03:58:09 -- accel/accel.sh@19 -- # IFS=: 00:06:54.986 03:58:09 -- accel/accel.sh@19 -- # read -r var val 00:06:54.986 03:58:09 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.986 03:58:09 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:54.986 03:58:09 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.986 00:06:54.986 real 0m1.314s 00:06:54.986 user 0m1.220s 00:06:54.986 sys 0m0.097s 00:06:54.986 03:58:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:54.986 03:58:09 -- common/autotest_common.sh@10 -- # set +x 00:06:54.986 ************************************ 00:06:54.986 END TEST accel_compare 00:06:54.986 ************************************ 00:06:54.986 03:58:09 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:54.986 03:58:09 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:54.986 03:58:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:54.986 03:58:09 -- common/autotest_common.sh@10 -- # set +x 00:06:55.245 ************************************ 00:06:55.245 START TEST accel_xor 00:06:55.245 ************************************ 00:06:55.245 03:58:09 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:06:55.245 03:58:09 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.245 03:58:09 -- accel/accel.sh@17 -- # local accel_module 00:06:55.245 03:58:09 -- accel/accel.sh@19 -- # IFS=: 
00:06:55.245 03:58:09 -- accel/accel.sh@19 -- # read -r var val 00:06:55.245 03:58:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:55.245 03:58:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:55.245 03:58:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.245 03:58:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.245 03:58:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.245 03:58:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.245 03:58:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.245 03:58:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.245 03:58:09 -- accel/accel.sh@40 -- # local IFS=, 00:06:55.245 03:58:09 -- accel/accel.sh@41 -- # jq -r . 00:06:55.245 [2024-04-19 03:58:09.654206] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:06:55.245 [2024-04-19 03:58:09.654269] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147738 ] 00:06:55.245 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.245 [2024-04-19 03:58:09.708298] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.505 [2024-04-19 03:58:09.780449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.505 03:58:09 -- accel/accel.sh@20 -- # val= 00:06:55.505 03:58:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # IFS=: 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # read -r var val 00:06:55.505 03:58:09 -- accel/accel.sh@20 -- # val= 00:06:55.505 03:58:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # IFS=: 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # read -r var val 00:06:55.505 03:58:09 -- accel/accel.sh@20 -- # val=0x1 00:06:55.505 03:58:09 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # IFS=: 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # read -r var val 00:06:55.505 03:58:09 -- accel/accel.sh@20 -- # val= 00:06:55.505 03:58:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # IFS=: 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # read -r var val 00:06:55.505 03:58:09 -- accel/accel.sh@20 -- # val= 00:06:55.505 03:58:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # IFS=: 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # read -r var val 00:06:55.505 03:58:09 -- accel/accel.sh@20 -- # val=xor 00:06:55.505 03:58:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.505 03:58:09 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # IFS=: 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # read -r var val 00:06:55.505 03:58:09 -- accel/accel.sh@20 -- # val=2 00:06:55.505 03:58:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # IFS=: 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # read -r var val 00:06:55.505 03:58:09 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.505 03:58:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # IFS=: 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # read -r var val 00:06:55.505 03:58:09 -- accel/accel.sh@20 -- # val= 00:06:55.505 03:58:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # IFS=: 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # read -r var val 00:06:55.505 03:58:09 -- accel/accel.sh@20 -- # val=software 00:06:55.505 03:58:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.505 03:58:09 -- accel/accel.sh@22 -- # accel_module=software 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # IFS=: 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # read -r var val 00:06:55.505 03:58:09 -- accel/accel.sh@20 -- # 
val=32 00:06:55.505 03:58:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # IFS=: 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # read -r var val 00:06:55.505 03:58:09 -- accel/accel.sh@20 -- # val=32 00:06:55.505 03:58:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # IFS=: 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # read -r var val 00:06:55.505 03:58:09 -- accel/accel.sh@20 -- # val=1 00:06:55.505 03:58:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # IFS=: 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # read -r var val 00:06:55.505 03:58:09 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.505 03:58:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # IFS=: 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # read -r var val 00:06:55.505 03:58:09 -- accel/accel.sh@20 -- # val=Yes 00:06:55.505 03:58:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # IFS=: 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # read -r var val 00:06:55.505 03:58:09 -- accel/accel.sh@20 -- # val= 00:06:55.505 03:58:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # IFS=: 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # read -r var val 00:06:55.505 03:58:09 -- accel/accel.sh@20 -- # val= 00:06:55.505 03:58:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # IFS=: 00:06:55.505 03:58:09 -- accel/accel.sh@19 -- # read -r var val 00:06:56.443 03:58:10 -- accel/accel.sh@20 -- # val= 00:06:56.443 03:58:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.443 03:58:10 -- accel/accel.sh@19 -- # IFS=: 00:06:56.443 03:58:10 -- accel/accel.sh@19 -- # read -r var val 00:06:56.443 03:58:10 -- accel/accel.sh@20 -- # val= 00:06:56.443 03:58:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.443 03:58:10 -- 
accel/accel.sh@19 -- # IFS=: 00:06:56.443 03:58:10 -- accel/accel.sh@19 -- # read -r var val 00:06:56.443 03:58:10 -- accel/accel.sh@20 -- # val= 00:06:56.443 03:58:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.443 03:58:10 -- accel/accel.sh@19 -- # IFS=: 00:06:56.443 03:58:10 -- accel/accel.sh@19 -- # read -r var val 00:06:56.443 03:58:10 -- accel/accel.sh@20 -- # val= 00:06:56.443 03:58:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.443 03:58:10 -- accel/accel.sh@19 -- # IFS=: 00:06:56.443 03:58:10 -- accel/accel.sh@19 -- # read -r var val 00:06:56.443 03:58:10 -- accel/accel.sh@20 -- # val= 00:06:56.443 03:58:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.443 03:58:10 -- accel/accel.sh@19 -- # IFS=: 00:06:56.443 03:58:10 -- accel/accel.sh@19 -- # read -r var val 00:06:56.443 03:58:10 -- accel/accel.sh@20 -- # val= 00:06:56.443 03:58:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.443 03:58:10 -- accel/accel.sh@19 -- # IFS=: 00:06:56.443 03:58:10 -- accel/accel.sh@19 -- # read -r var val 00:06:56.443 03:58:10 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.443 03:58:10 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:56.443 03:58:10 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.443 00:06:56.443 real 0m1.342s 00:06:56.443 user 0m1.234s 00:06:56.443 sys 0m0.109s 00:06:56.443 03:58:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:56.443 03:58:10 -- common/autotest_common.sh@10 -- # set +x 00:06:56.443 ************************************ 00:06:56.443 END TEST accel_xor 00:06:56.443 ************************************ 00:06:56.703 03:58:11 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:56.703 03:58:11 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:56.703 03:58:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.703 03:58:11 -- common/autotest_common.sh@10 -- # set +x 00:06:56.703 ************************************ 00:06:56.703 START TEST 
accel_xor 00:06:56.703 ************************************ 00:06:56.703 03:58:11 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:06:56.703 03:58:11 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.703 03:58:11 -- accel/accel.sh@17 -- # local accel_module 00:06:56.703 03:58:11 -- accel/accel.sh@19 -- # IFS=: 00:06:56.703 03:58:11 -- accel/accel.sh@19 -- # read -r var val 00:06:56.703 03:58:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:56.703 03:58:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:56.703 03:58:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.703 03:58:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.703 03:58:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.703 03:58:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.703 03:58:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.703 03:58:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.703 03:58:11 -- accel/accel.sh@40 -- # local IFS=, 00:06:56.703 03:58:11 -- accel/accel.sh@41 -- # jq -r . 00:06:56.703 [2024-04-19 03:58:11.148881] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:06:56.703 [2024-04-19 03:58:11.148934] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148032 ] 00:06:56.703 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.703 [2024-04-19 03:58:11.204988] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.963 [2024-04-19 03:58:11.282601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.963 03:58:11 -- accel/accel.sh@20 -- # val= 00:06:56.963 03:58:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # IFS=: 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # read -r var val 00:06:56.963 03:58:11 -- accel/accel.sh@20 -- # val= 00:06:56.963 03:58:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # IFS=: 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # read -r var val 00:06:56.963 03:58:11 -- accel/accel.sh@20 -- # val=0x1 00:06:56.963 03:58:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # IFS=: 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # read -r var val 00:06:56.963 03:58:11 -- accel/accel.sh@20 -- # val= 00:06:56.963 03:58:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # IFS=: 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # read -r var val 00:06:56.963 03:58:11 -- accel/accel.sh@20 -- # val= 00:06:56.963 03:58:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # IFS=: 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # read -r var val 00:06:56.963 03:58:11 -- accel/accel.sh@20 -- # val=xor 00:06:56.963 03:58:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.963 03:58:11 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # IFS=: 00:06:56.963 03:58:11 -- accel/accel.sh@19 
-- # read -r var val 00:06:56.963 03:58:11 -- accel/accel.sh@20 -- # val=3 00:06:56.963 03:58:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # IFS=: 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # read -r var val 00:06:56.963 03:58:11 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.963 03:58:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # IFS=: 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # read -r var val 00:06:56.963 03:58:11 -- accel/accel.sh@20 -- # val= 00:06:56.963 03:58:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # IFS=: 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # read -r var val 00:06:56.963 03:58:11 -- accel/accel.sh@20 -- # val=software 00:06:56.963 03:58:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.963 03:58:11 -- accel/accel.sh@22 -- # accel_module=software 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # IFS=: 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # read -r var val 00:06:56.963 03:58:11 -- accel/accel.sh@20 -- # val=32 00:06:56.963 03:58:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # IFS=: 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # read -r var val 00:06:56.963 03:58:11 -- accel/accel.sh@20 -- # val=32 00:06:56.963 03:58:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # IFS=: 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # read -r var val 00:06:56.963 03:58:11 -- accel/accel.sh@20 -- # val=1 00:06:56.963 03:58:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # IFS=: 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # read -r var val 00:06:56.963 03:58:11 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.963 03:58:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # IFS=: 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # read -r var val 
00:06:56.963 03:58:11 -- accel/accel.sh@20 -- # val=Yes 00:06:56.963 03:58:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # IFS=: 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # read -r var val 00:06:56.963 03:58:11 -- accel/accel.sh@20 -- # val= 00:06:56.963 03:58:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # IFS=: 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # read -r var val 00:06:56.963 03:58:11 -- accel/accel.sh@20 -- # val= 00:06:56.963 03:58:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # IFS=: 00:06:56.963 03:58:11 -- accel/accel.sh@19 -- # read -r var val 00:06:58.348 03:58:12 -- accel/accel.sh@20 -- # val= 00:06:58.348 03:58:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.348 03:58:12 -- accel/accel.sh@19 -- # IFS=: 00:06:58.348 03:58:12 -- accel/accel.sh@19 -- # read -r var val 00:06:58.348 03:58:12 -- accel/accel.sh@20 -- # val= 00:06:58.348 03:58:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.348 03:58:12 -- accel/accel.sh@19 -- # IFS=: 00:06:58.348 03:58:12 -- accel/accel.sh@19 -- # read -r var val 00:06:58.348 03:58:12 -- accel/accel.sh@20 -- # val= 00:06:58.348 03:58:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.348 03:58:12 -- accel/accel.sh@19 -- # IFS=: 00:06:58.348 03:58:12 -- accel/accel.sh@19 -- # read -r var val 00:06:58.348 03:58:12 -- accel/accel.sh@20 -- # val= 00:06:58.348 03:58:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.348 03:58:12 -- accel/accel.sh@19 -- # IFS=: 00:06:58.348 03:58:12 -- accel/accel.sh@19 -- # read -r var val 00:06:58.348 03:58:12 -- accel/accel.sh@20 -- # val= 00:06:58.348 03:58:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.348 03:58:12 -- accel/accel.sh@19 -- # IFS=: 00:06:58.348 03:58:12 -- accel/accel.sh@19 -- # read -r var val 00:06:58.348 03:58:12 -- accel/accel.sh@20 -- # val= 00:06:58.348 03:58:12 -- accel/accel.sh@21 -- # case "$var" in 
00:06:58.348 03:58:12 -- accel/accel.sh@19 -- # IFS=: 00:06:58.348 03:58:12 -- accel/accel.sh@19 -- # read -r var val 00:06:58.348 03:58:12 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.348 03:58:12 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:58.348 03:58:12 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.348 00:06:58.348 real 0m1.350s 00:06:58.348 user 0m1.231s 00:06:58.348 sys 0m0.122s 00:06:58.348 03:58:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:58.348 03:58:12 -- common/autotest_common.sh@10 -- # set +x 00:06:58.348 ************************************ 00:06:58.348 END TEST accel_xor 00:06:58.348 ************************************ 00:06:58.348 03:58:12 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:58.348 03:58:12 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:58.348 03:58:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.348 03:58:12 -- common/autotest_common.sh@10 -- # set +x 00:06:58.348 ************************************ 00:06:58.348 START TEST accel_dif_verify 00:06:58.348 ************************************ 00:06:58.348 03:58:12 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:06:58.348 03:58:12 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.348 03:58:12 -- accel/accel.sh@17 -- # local accel_module 00:06:58.348 03:58:12 -- accel/accel.sh@19 -- # IFS=: 00:06:58.348 03:58:12 -- accel/accel.sh@19 -- # read -r var val 00:06:58.348 03:58:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:58.348 03:58:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:58.348 03:58:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.348 03:58:12 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.348 03:58:12 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.348 03:58:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 
00:06:58.348 03:58:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.348 03:58:12 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.348 03:58:12 -- accel/accel.sh@40 -- # local IFS=, 00:06:58.348 03:58:12 -- accel/accel.sh@41 -- # jq -r . 00:06:58.348 [2024-04-19 03:58:12.633266] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:06:58.348 [2024-04-19 03:58:12.633311] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148321 ] 00:06:58.348 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.348 [2024-04-19 03:58:12.685075] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.348 [2024-04-19 03:58:12.750561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.348 03:58:12 -- accel/accel.sh@20 -- # val= 00:06:58.348 03:58:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.348 03:58:12 -- accel/accel.sh@19 -- # IFS=: 00:06:58.348 03:58:12 -- accel/accel.sh@19 -- # read -r var val 00:06:58.348 03:58:12 -- accel/accel.sh@20 -- # val= 00:06:58.348 03:58:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.348 03:58:12 -- accel/accel.sh@19 -- # IFS=: 00:06:58.348 03:58:12 -- accel/accel.sh@19 -- # read -r var val 00:06:58.348 03:58:12 -- accel/accel.sh@20 -- # val=0x1 00:06:58.348 03:58:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.348 03:58:12 -- accel/accel.sh@19 -- # IFS=: 00:06:58.348 03:58:12 -- accel/accel.sh@19 -- # read -r var val 00:06:58.349 03:58:12 -- accel/accel.sh@20 -- # val= 00:06:58.349 03:58:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # IFS=: 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # read -r var val 00:06:58.349 03:58:12 -- accel/accel.sh@20 -- # val= 00:06:58.349 03:58:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.349 03:58:12 -- 
accel/accel.sh@19 -- # IFS=: 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # read -r var val 00:06:58.349 03:58:12 -- accel/accel.sh@20 -- # val=dif_verify 00:06:58.349 03:58:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.349 03:58:12 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # IFS=: 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # read -r var val 00:06:58.349 03:58:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.349 03:58:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # IFS=: 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # read -r var val 00:06:58.349 03:58:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.349 03:58:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # IFS=: 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # read -r var val 00:06:58.349 03:58:12 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:58.349 03:58:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # IFS=: 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # read -r var val 00:06:58.349 03:58:12 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:58.349 03:58:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # IFS=: 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # read -r var val 00:06:58.349 03:58:12 -- accel/accel.sh@20 -- # val= 00:06:58.349 03:58:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # IFS=: 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # read -r var val 00:06:58.349 03:58:12 -- accel/accel.sh@20 -- # val=software 00:06:58.349 03:58:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.349 03:58:12 -- accel/accel.sh@22 -- # accel_module=software 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # IFS=: 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # read -r var val 00:06:58.349 03:58:12 -- accel/accel.sh@20 -- # val=32 00:06:58.349 
03:58:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # IFS=: 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # read -r var val 00:06:58.349 03:58:12 -- accel/accel.sh@20 -- # val=32 00:06:58.349 03:58:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # IFS=: 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # read -r var val 00:06:58.349 03:58:12 -- accel/accel.sh@20 -- # val=1 00:06:58.349 03:58:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # IFS=: 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # read -r var val 00:06:58.349 03:58:12 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.349 03:58:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # IFS=: 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # read -r var val 00:06:58.349 03:58:12 -- accel/accel.sh@20 -- # val=No 00:06:58.349 03:58:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # IFS=: 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # read -r var val 00:06:58.349 03:58:12 -- accel/accel.sh@20 -- # val= 00:06:58.349 03:58:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # IFS=: 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # read -r var val 00:06:58.349 03:58:12 -- accel/accel.sh@20 -- # val= 00:06:58.349 03:58:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # IFS=: 00:06:58.349 03:58:12 -- accel/accel.sh@19 -- # read -r var val 00:06:59.727 03:58:13 -- accel/accel.sh@20 -- # val= 00:06:59.727 03:58:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.727 03:58:13 -- accel/accel.sh@19 -- # IFS=: 00:06:59.727 03:58:13 -- accel/accel.sh@19 -- # read -r var val 00:06:59.727 03:58:13 -- accel/accel.sh@20 -- # val= 00:06:59.727 03:58:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.727 03:58:13 -- accel/accel.sh@19 -- # IFS=: 
00:06:59.727 03:58:13 -- accel/accel.sh@19 -- # read -r var val 00:06:59.727 03:58:13 -- accel/accel.sh@20 -- # val= 00:06:59.727 03:58:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.727 03:58:13 -- accel/accel.sh@19 -- # IFS=: 00:06:59.727 03:58:13 -- accel/accel.sh@19 -- # read -r var val 00:06:59.727 03:58:13 -- accel/accel.sh@20 -- # val= 00:06:59.727 03:58:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.727 03:58:13 -- accel/accel.sh@19 -- # IFS=: 00:06:59.727 03:58:13 -- accel/accel.sh@19 -- # read -r var val 00:06:59.727 03:58:13 -- accel/accel.sh@20 -- # val= 00:06:59.727 03:58:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.727 03:58:13 -- accel/accel.sh@19 -- # IFS=: 00:06:59.727 03:58:13 -- accel/accel.sh@19 -- # read -r var val 00:06:59.727 03:58:13 -- accel/accel.sh@20 -- # val= 00:06:59.727 03:58:13 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.727 03:58:13 -- accel/accel.sh@19 -- # IFS=: 00:06:59.727 03:58:13 -- accel/accel.sh@19 -- # read -r var val 00:06:59.727 03:58:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.727 03:58:13 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:59.727 03:58:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.727 00:06:59.727 real 0m1.330s 00:06:59.727 user 0m1.225s 00:06:59.727 sys 0m0.109s 00:06:59.727 03:58:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:59.727 03:58:13 -- common/autotest_common.sh@10 -- # set +x 00:06:59.727 ************************************ 00:06:59.727 END TEST accel_dif_verify 00:06:59.727 ************************************ 00:06:59.727 03:58:13 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:59.727 03:58:13 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:59.727 03:58:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.727 03:58:13 -- common/autotest_common.sh@10 -- # set +x 00:06:59.727 ************************************ 00:06:59.727 START TEST 
accel_dif_generate 00:06:59.727 ************************************ 00:06:59.727 03:58:14 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:06:59.727 03:58:14 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.727 03:58:14 -- accel/accel.sh@17 -- # local accel_module 00:06:59.727 03:58:14 -- accel/accel.sh@19 -- # IFS=: 00:06:59.727 03:58:14 -- accel/accel.sh@19 -- # read -r var val 00:06:59.727 03:58:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:59.727 03:58:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:59.727 03:58:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.727 03:58:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.727 03:58:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.727 03:58:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.727 03:58:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.727 03:58:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.727 03:58:14 -- accel/accel.sh@40 -- # local IFS=, 00:06:59.727 03:58:14 -- accel/accel.sh@41 -- # jq -r . 00:06:59.727 [2024-04-19 03:58:14.092283] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:06:59.727 [2024-04-19 03:58:14.092331] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148610 ] 00:06:59.727 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.727 [2024-04-19 03:58:14.143232] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.727 [2024-04-19 03:58:14.214214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.727 03:58:14 -- accel/accel.sh@20 -- # val= 00:06:59.727 03:58:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.727 03:58:14 -- accel/accel.sh@19 -- # IFS=: 00:06:59.727 03:58:14 -- accel/accel.sh@19 -- # read -r var val 00:06:59.727 03:58:14 -- accel/accel.sh@20 -- # val= 00:06:59.727 03:58:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.727 03:58:14 -- accel/accel.sh@19 -- # IFS=: 00:06:59.727 03:58:14 -- accel/accel.sh@19 -- # read -r var val 00:06:59.727 03:58:14 -- accel/accel.sh@20 -- # val=0x1 00:06:59.727 03:58:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.727 03:58:14 -- accel/accel.sh@19 -- # IFS=: 00:06:59.727 03:58:14 -- accel/accel.sh@19 -- # read -r var val 00:06:59.727 03:58:14 -- accel/accel.sh@20 -- # val= 00:06:59.727 03:58:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.727 03:58:14 -- accel/accel.sh@19 -- # IFS=: 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # read -r var val 00:06:59.987 03:58:14 -- accel/accel.sh@20 -- # val= 00:06:59.987 03:58:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # IFS=: 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # read -r var val 00:06:59.987 03:58:14 -- accel/accel.sh@20 -- # val=dif_generate 00:06:59.987 03:58:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.987 03:58:14 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # IFS=: 00:06:59.987 03:58:14 -- 
accel/accel.sh@19 -- # read -r var val 00:06:59.987 03:58:14 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.987 03:58:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # IFS=: 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # read -r var val 00:06:59.987 03:58:14 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.987 03:58:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # IFS=: 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # read -r var val 00:06:59.987 03:58:14 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:59.987 03:58:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # IFS=: 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # read -r var val 00:06:59.987 03:58:14 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:59.987 03:58:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # IFS=: 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # read -r var val 00:06:59.987 03:58:14 -- accel/accel.sh@20 -- # val= 00:06:59.987 03:58:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # IFS=: 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # read -r var val 00:06:59.987 03:58:14 -- accel/accel.sh@20 -- # val=software 00:06:59.987 03:58:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.987 03:58:14 -- accel/accel.sh@22 -- # accel_module=software 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # IFS=: 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # read -r var val 00:06:59.987 03:58:14 -- accel/accel.sh@20 -- # val=32 00:06:59.987 03:58:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # IFS=: 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # read -r var val 00:06:59.987 03:58:14 -- accel/accel.sh@20 -- # val=32 00:06:59.987 03:58:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # IFS=: 00:06:59.987 03:58:14 -- 
accel/accel.sh@19 -- # read -r var val 00:06:59.987 03:58:14 -- accel/accel.sh@20 -- # val=1 00:06:59.987 03:58:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # IFS=: 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # read -r var val 00:06:59.987 03:58:14 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.987 03:58:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # IFS=: 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # read -r var val 00:06:59.987 03:58:14 -- accel/accel.sh@20 -- # val=No 00:06:59.987 03:58:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # IFS=: 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # read -r var val 00:06:59.987 03:58:14 -- accel/accel.sh@20 -- # val= 00:06:59.987 03:58:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # IFS=: 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # read -r var val 00:06:59.987 03:58:14 -- accel/accel.sh@20 -- # val= 00:06:59.987 03:58:14 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # IFS=: 00:06:59.987 03:58:14 -- accel/accel.sh@19 -- # read -r var val 00:07:00.925 03:58:15 -- accel/accel.sh@20 -- # val= 00:07:00.925 03:58:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.925 03:58:15 -- accel/accel.sh@19 -- # IFS=: 00:07:00.925 03:58:15 -- accel/accel.sh@19 -- # read -r var val 00:07:00.925 03:58:15 -- accel/accel.sh@20 -- # val= 00:07:00.925 03:58:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.925 03:58:15 -- accel/accel.sh@19 -- # IFS=: 00:07:00.925 03:58:15 -- accel/accel.sh@19 -- # read -r var val 00:07:00.925 03:58:15 -- accel/accel.sh@20 -- # val= 00:07:00.925 03:58:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.925 03:58:15 -- accel/accel.sh@19 -- # IFS=: 00:07:00.925 03:58:15 -- accel/accel.sh@19 -- # read -r var val 00:07:00.925 03:58:15 -- accel/accel.sh@20 -- # val= 00:07:00.925 03:58:15 
-- accel/accel.sh@21 -- # case "$var" in 00:07:00.925 03:58:15 -- accel/accel.sh@19 -- # IFS=: 00:07:00.925 03:58:15 -- accel/accel.sh@19 -- # read -r var val 00:07:00.925 03:58:15 -- accel/accel.sh@20 -- # val= 00:07:00.925 03:58:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.925 03:58:15 -- accel/accel.sh@19 -- # IFS=: 00:07:00.925 03:58:15 -- accel/accel.sh@19 -- # read -r var val 00:07:00.925 03:58:15 -- accel/accel.sh@20 -- # val= 00:07:00.925 03:58:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.925 03:58:15 -- accel/accel.sh@19 -- # IFS=: 00:07:00.925 03:58:15 -- accel/accel.sh@19 -- # read -r var val 00:07:00.925 03:58:15 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.925 03:58:15 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:00.925 03:58:15 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.925 00:07:00.925 real 0m1.333s 00:07:00.925 user 0m1.223s 00:07:00.925 sys 0m0.113s 00:07:00.925 03:58:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:00.925 03:58:15 -- common/autotest_common.sh@10 -- # set +x 00:07:00.925 ************************************ 00:07:00.925 END TEST accel_dif_generate 00:07:00.925 ************************************ 00:07:00.925 03:58:15 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:00.925 03:58:15 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:00.925 03:58:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.925 03:58:15 -- common/autotest_common.sh@10 -- # set +x 00:07:01.185 ************************************ 00:07:01.185 START TEST accel_dif_generate_copy 00:07:01.185 ************************************ 00:07:01.185 03:58:15 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:07:01.185 03:58:15 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.185 03:58:15 -- accel/accel.sh@17 -- # local accel_module 00:07:01.185 03:58:15 -- accel/accel.sh@19 -- # IFS=: 
00:07:01.185 03:58:15 -- accel/accel.sh@19 -- # read -r var val 00:07:01.185 03:58:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:01.185 03:58:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:01.185 03:58:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.185 03:58:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.185 03:58:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.185 03:58:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.185 03:58:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.185 03:58:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.185 03:58:15 -- accel/accel.sh@40 -- # local IFS=, 00:07:01.185 03:58:15 -- accel/accel.sh@41 -- # jq -r . 00:07:01.185 [2024-04-19 03:58:15.562809] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:07:01.185 [2024-04-19 03:58:15.562851] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148899 ] 00:07:01.185 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.185 [2024-04-19 03:58:15.612013] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.185 [2024-04-19 03:58:15.678348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.445 03:58:15 -- accel/accel.sh@20 -- # val= 00:07:01.445 03:58:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # IFS=: 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # read -r var val 00:07:01.445 03:58:15 -- accel/accel.sh@20 -- # val= 00:07:01.445 03:58:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # IFS=: 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # read -r var val 00:07:01.445 03:58:15 -- accel/accel.sh@20 -- # val=0x1 
00:07:01.445 03:58:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # IFS=: 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # read -r var val 00:07:01.445 03:58:15 -- accel/accel.sh@20 -- # val= 00:07:01.445 03:58:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # IFS=: 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # read -r var val 00:07:01.445 03:58:15 -- accel/accel.sh@20 -- # val= 00:07:01.445 03:58:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # IFS=: 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # read -r var val 00:07:01.445 03:58:15 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:01.445 03:58:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.445 03:58:15 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # IFS=: 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # read -r var val 00:07:01.445 03:58:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.445 03:58:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # IFS=: 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # read -r var val 00:07:01.445 03:58:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.445 03:58:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # IFS=: 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # read -r var val 00:07:01.445 03:58:15 -- accel/accel.sh@20 -- # val= 00:07:01.445 03:58:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # IFS=: 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # read -r var val 00:07:01.445 03:58:15 -- accel/accel.sh@20 -- # val=software 00:07:01.445 03:58:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.445 03:58:15 -- accel/accel.sh@22 -- # accel_module=software 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # IFS=: 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # 
read -r var val 00:07:01.445 03:58:15 -- accel/accel.sh@20 -- # val=32 00:07:01.445 03:58:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # IFS=: 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # read -r var val 00:07:01.445 03:58:15 -- accel/accel.sh@20 -- # val=32 00:07:01.445 03:58:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # IFS=: 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # read -r var val 00:07:01.445 03:58:15 -- accel/accel.sh@20 -- # val=1 00:07:01.445 03:58:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # IFS=: 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # read -r var val 00:07:01.445 03:58:15 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.445 03:58:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # IFS=: 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # read -r var val 00:07:01.445 03:58:15 -- accel/accel.sh@20 -- # val=No 00:07:01.445 03:58:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # IFS=: 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # read -r var val 00:07:01.445 03:58:15 -- accel/accel.sh@20 -- # val= 00:07:01.445 03:58:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # IFS=: 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # read -r var val 00:07:01.445 03:58:15 -- accel/accel.sh@20 -- # val= 00:07:01.445 03:58:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # IFS=: 00:07:01.445 03:58:15 -- accel/accel.sh@19 -- # read -r var val 00:07:02.383 03:58:16 -- accel/accel.sh@20 -- # val= 00:07:02.383 03:58:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.383 03:58:16 -- accel/accel.sh@19 -- # IFS=: 00:07:02.383 03:58:16 -- accel/accel.sh@19 -- # read -r var val 00:07:02.383 03:58:16 -- accel/accel.sh@20 -- # val= 00:07:02.383 03:58:16 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:02.383 03:58:16 -- accel/accel.sh@19 -- # IFS=: 00:07:02.383 03:58:16 -- accel/accel.sh@19 -- # read -r var val 00:07:02.383 03:58:16 -- accel/accel.sh@20 -- # val= 00:07:02.383 03:58:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.383 03:58:16 -- accel/accel.sh@19 -- # IFS=: 00:07:02.383 03:58:16 -- accel/accel.sh@19 -- # read -r var val 00:07:02.383 03:58:16 -- accel/accel.sh@20 -- # val= 00:07:02.383 03:58:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.383 03:58:16 -- accel/accel.sh@19 -- # IFS=: 00:07:02.383 03:58:16 -- accel/accel.sh@19 -- # read -r var val 00:07:02.383 03:58:16 -- accel/accel.sh@20 -- # val= 00:07:02.383 03:58:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.383 03:58:16 -- accel/accel.sh@19 -- # IFS=: 00:07:02.383 03:58:16 -- accel/accel.sh@19 -- # read -r var val 00:07:02.383 03:58:16 -- accel/accel.sh@20 -- # val= 00:07:02.383 03:58:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.383 03:58:16 -- accel/accel.sh@19 -- # IFS=: 00:07:02.383 03:58:16 -- accel/accel.sh@19 -- # read -r var val 00:07:02.383 03:58:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.383 03:58:16 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:02.383 03:58:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.383 00:07:02.383 real 0m1.314s 00:07:02.383 user 0m1.220s 00:07:02.383 sys 0m0.098s 00:07:02.383 03:58:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:02.383 03:58:16 -- common/autotest_common.sh@10 -- # set +x 00:07:02.383 ************************************ 00:07:02.383 END TEST accel_dif_generate_copy 00:07:02.383 ************************************ 00:07:02.383 03:58:16 -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:02.383 03:58:16 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:02.383 03:58:16 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 
00:07:02.383 03:58:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.383 03:58:16 -- common/autotest_common.sh@10 -- # set +x 00:07:02.643 ************************************ 00:07:02.643 START TEST accel_comp 00:07:02.643 ************************************ 00:07:02.643 03:58:17 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:02.643 03:58:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.643 03:58:17 -- accel/accel.sh@17 -- # local accel_module 00:07:02.643 03:58:17 -- accel/accel.sh@19 -- # IFS=: 00:07:02.643 03:58:17 -- accel/accel.sh@19 -- # read -r var val 00:07:02.643 03:58:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:02.643 03:58:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:02.643 03:58:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.643 03:58:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.643 03:58:17 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.643 03:58:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.643 03:58:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.643 03:58:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.643 03:58:17 -- accel/accel.sh@40 -- # local IFS=, 00:07:02.643 03:58:17 -- accel/accel.sh@41 -- # jq -r . 00:07:02.643 [2024-04-19 03:58:17.035584] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:07:02.643 [2024-04-19 03:58:17.035636] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149189 ] 00:07:02.643 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.643 [2024-04-19 03:58:17.088309] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.643 [2024-04-19 03:58:17.158325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.902 03:58:17 -- accel/accel.sh@20 -- # val= 00:07:02.902 03:58:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.902 03:58:17 -- accel/accel.sh@19 -- # IFS=: 00:07:02.902 03:58:17 -- accel/accel.sh@19 -- # read -r var val 00:07:02.902 03:58:17 -- accel/accel.sh@20 -- # val= 00:07:02.902 03:58:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.902 03:58:17 -- accel/accel.sh@19 -- # IFS=: 00:07:02.902 03:58:17 -- accel/accel.sh@19 -- # read -r var val 00:07:02.902 03:58:17 -- accel/accel.sh@20 -- # val= 00:07:02.902 03:58:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.902 03:58:17 -- accel/accel.sh@19 -- # IFS=: 00:07:02.902 03:58:17 -- accel/accel.sh@19 -- # read -r var val 00:07:02.902 03:58:17 -- accel/accel.sh@20 -- # val=0x1 00:07:02.902 03:58:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.902 03:58:17 -- accel/accel.sh@19 -- # IFS=: 00:07:02.902 03:58:17 -- accel/accel.sh@19 -- # read -r var val 00:07:02.902 03:58:17 -- accel/accel.sh@20 -- # val= 00:07:02.902 03:58:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.902 03:58:17 -- accel/accel.sh@19 -- # IFS=: 00:07:02.902 03:58:17 -- accel/accel.sh@19 -- # read -r var val 00:07:02.902 03:58:17 -- accel/accel.sh@20 -- # val= 00:07:02.902 03:58:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.902 03:58:17 -- accel/accel.sh@19 -- # IFS=: 00:07:02.902 03:58:17 -- accel/accel.sh@19 -- # read -r var val 00:07:02.902 03:58:17 -- accel/accel.sh@20 
-- # val=compress 00:07:02.902 03:58:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.902 03:58:17 -- accel/accel.sh@23 -- # accel_opc=compress 00:07:02.902 03:58:17 -- accel/accel.sh@19 -- # IFS=: 00:07:02.902 03:58:17 -- accel/accel.sh@19 -- # read -r var val 00:07:02.902 03:58:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.902 03:58:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.902 03:58:17 -- accel/accel.sh@19 -- # IFS=: 00:07:02.902 03:58:17 -- accel/accel.sh@19 -- # read -r var val 00:07:02.902 03:58:17 -- accel/accel.sh@20 -- # val= 00:07:02.902 03:58:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.902 03:58:17 -- accel/accel.sh@19 -- # IFS=: 00:07:02.902 03:58:17 -- accel/accel.sh@19 -- # read -r var val 00:07:02.902 03:58:17 -- accel/accel.sh@20 -- # val=software 00:07:02.902 03:58:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.903 03:58:17 -- accel/accel.sh@22 -- # accel_module=software 00:07:02.903 03:58:17 -- accel/accel.sh@19 -- # IFS=: 00:07:02.903 03:58:17 -- accel/accel.sh@19 -- # read -r var val 00:07:02.903 03:58:17 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:02.903 03:58:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.903 03:58:17 -- accel/accel.sh@19 -- # IFS=: 00:07:02.903 03:58:17 -- accel/accel.sh@19 -- # read -r var val 00:07:02.903 03:58:17 -- accel/accel.sh@20 -- # val=32 00:07:02.903 03:58:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.903 03:58:17 -- accel/accel.sh@19 -- # IFS=: 00:07:02.903 03:58:17 -- accel/accel.sh@19 -- # read -r var val 00:07:02.903 03:58:17 -- accel/accel.sh@20 -- # val=32 00:07:02.903 03:58:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.903 03:58:17 -- accel/accel.sh@19 -- # IFS=: 00:07:02.903 03:58:17 -- accel/accel.sh@19 -- # read -r var val 00:07:02.903 03:58:17 -- accel/accel.sh@20 -- # val=1 00:07:02.903 03:58:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.903 03:58:17 -- accel/accel.sh@19 -- # IFS=: 
00:07:02.903 03:58:17 -- accel/accel.sh@19 -- # read -r var val 00:07:02.903 03:58:17 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.903 03:58:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.903 03:58:17 -- accel/accel.sh@19 -- # IFS=: 00:07:02.903 03:58:17 -- accel/accel.sh@19 -- # read -r var val 00:07:02.903 03:58:17 -- accel/accel.sh@20 -- # val=No 00:07:02.903 03:58:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.903 03:58:17 -- accel/accel.sh@19 -- # IFS=: 00:07:02.903 03:58:17 -- accel/accel.sh@19 -- # read -r var val 00:07:02.903 03:58:17 -- accel/accel.sh@20 -- # val= 00:07:02.903 03:58:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.903 03:58:17 -- accel/accel.sh@19 -- # IFS=: 00:07:02.903 03:58:17 -- accel/accel.sh@19 -- # read -r var val 00:07:02.903 03:58:17 -- accel/accel.sh@20 -- # val= 00:07:02.903 03:58:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.903 03:58:17 -- accel/accel.sh@19 -- # IFS=: 00:07:02.903 03:58:17 -- accel/accel.sh@19 -- # read -r var val 00:07:03.841 03:58:18 -- accel/accel.sh@20 -- # val= 00:07:03.841 03:58:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.841 03:58:18 -- accel/accel.sh@19 -- # IFS=: 00:07:03.841 03:58:18 -- accel/accel.sh@19 -- # read -r var val 00:07:03.841 03:58:18 -- accel/accel.sh@20 -- # val= 00:07:03.841 03:58:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.841 03:58:18 -- accel/accel.sh@19 -- # IFS=: 00:07:03.841 03:58:18 -- accel/accel.sh@19 -- # read -r var val 00:07:03.841 03:58:18 -- accel/accel.sh@20 -- # val= 00:07:03.841 03:58:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.841 03:58:18 -- accel/accel.sh@19 -- # IFS=: 00:07:03.841 03:58:18 -- accel/accel.sh@19 -- # read -r var val 00:07:03.841 03:58:18 -- accel/accel.sh@20 -- # val= 00:07:03.841 03:58:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.841 03:58:18 -- accel/accel.sh@19 -- # IFS=: 00:07:03.841 03:58:18 -- accel/accel.sh@19 -- # read -r var val 00:07:03.841 03:58:18 -- accel/accel.sh@20 -- # 
val= 00:07:03.841 03:58:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.841 03:58:18 -- accel/accel.sh@19 -- # IFS=: 00:07:03.841 03:58:18 -- accel/accel.sh@19 -- # read -r var val 00:07:03.841 03:58:18 -- accel/accel.sh@20 -- # val= 00:07:03.841 03:58:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.841 03:58:18 -- accel/accel.sh@19 -- # IFS=: 00:07:03.841 03:58:18 -- accel/accel.sh@19 -- # read -r var val 00:07:03.841 03:58:18 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.841 03:58:18 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:03.841 03:58:18 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.841 00:07:03.841 real 0m1.338s 00:07:03.841 user 0m1.225s 00:07:03.841 sys 0m0.116s 00:07:03.841 03:58:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:03.841 03:58:18 -- common/autotest_common.sh@10 -- # set +x 00:07:03.841 ************************************ 00:07:03.841 END TEST accel_comp 00:07:03.841 ************************************ 00:07:04.101 03:58:18 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:04.101 03:58:18 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:04.101 03:58:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.101 03:58:18 -- common/autotest_common.sh@10 -- # set +x 00:07:04.101 ************************************ 00:07:04.101 START TEST accel_decomp 00:07:04.101 ************************************ 00:07:04.101 03:58:18 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:04.101 03:58:18 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.101 03:58:18 -- accel/accel.sh@17 -- # local accel_module 00:07:04.101 03:58:18 -- accel/accel.sh@19 -- # IFS=: 00:07:04.101 03:58:18 -- accel/accel.sh@19 -- # read -r var val 00:07:04.101 03:58:18 -- accel/accel.sh@15 -- # accel_perf -t 1 
-w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:04.101 03:58:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:04.101 03:58:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.101 03:58:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.101 03:58:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.101 03:58:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.101 03:58:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.101 03:58:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.101 03:58:18 -- accel/accel.sh@40 -- # local IFS=, 00:07:04.101 03:58:18 -- accel/accel.sh@41 -- # jq -r . 00:07:04.101 [2024-04-19 03:58:18.523370] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:07:04.101 [2024-04-19 03:58:18.523449] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149478 ] 00:07:04.101 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.101 [2024-04-19 03:58:18.579095] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.361 [2024-04-19 03:58:18.651837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.361 03:58:18 -- accel/accel.sh@20 -- # val= 00:07:04.361 03:58:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # IFS=: 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # read -r var val 00:07:04.361 03:58:18 -- accel/accel.sh@20 -- # val= 00:07:04.361 03:58:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # IFS=: 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # read -r var val 00:07:04.361 03:58:18 -- accel/accel.sh@20 -- # val= 00:07:04.361 
03:58:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # IFS=: 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # read -r var val 00:07:04.361 03:58:18 -- accel/accel.sh@20 -- # val=0x1 00:07:04.361 03:58:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # IFS=: 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # read -r var val 00:07:04.361 03:58:18 -- accel/accel.sh@20 -- # val= 00:07:04.361 03:58:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # IFS=: 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # read -r var val 00:07:04.361 03:58:18 -- accel/accel.sh@20 -- # val= 00:07:04.361 03:58:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # IFS=: 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # read -r var val 00:07:04.361 03:58:18 -- accel/accel.sh@20 -- # val=decompress 00:07:04.361 03:58:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.361 03:58:18 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # IFS=: 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # read -r var val 00:07:04.361 03:58:18 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.361 03:58:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # IFS=: 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # read -r var val 00:07:04.361 03:58:18 -- accel/accel.sh@20 -- # val= 00:07:04.361 03:58:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # IFS=: 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # read -r var val 00:07:04.361 03:58:18 -- accel/accel.sh@20 -- # val=software 00:07:04.361 03:58:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.361 03:58:18 -- accel/accel.sh@22 -- # accel_module=software 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # IFS=: 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # read -r var val 00:07:04.361 03:58:18 
-- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:04.361 03:58:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # IFS=: 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # read -r var val 00:07:04.361 03:58:18 -- accel/accel.sh@20 -- # val=32 00:07:04.361 03:58:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # IFS=: 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # read -r var val 00:07:04.361 03:58:18 -- accel/accel.sh@20 -- # val=32 00:07:04.361 03:58:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # IFS=: 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # read -r var val 00:07:04.361 03:58:18 -- accel/accel.sh@20 -- # val=1 00:07:04.361 03:58:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # IFS=: 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # read -r var val 00:07:04.361 03:58:18 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.361 03:58:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # IFS=: 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # read -r var val 00:07:04.361 03:58:18 -- accel/accel.sh@20 -- # val=Yes 00:07:04.361 03:58:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # IFS=: 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # read -r var val 00:07:04.361 03:58:18 -- accel/accel.sh@20 -- # val= 00:07:04.361 03:58:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # IFS=: 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # read -r var val 00:07:04.361 03:58:18 -- accel/accel.sh@20 -- # val= 00:07:04.361 03:58:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # IFS=: 00:07:04.361 03:58:18 -- accel/accel.sh@19 -- # read -r var val 00:07:05.738 03:58:19 -- accel/accel.sh@20 -- # val= 00:07:05.738 
03:58:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.738 03:58:19 -- accel/accel.sh@19 -- # IFS=: 00:07:05.738 03:58:19 -- accel/accel.sh@19 -- # read -r var val 00:07:05.738 03:58:19 -- accel/accel.sh@20 -- # val= 00:07:05.738 03:58:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.738 03:58:19 -- accel/accel.sh@19 -- # IFS=: 00:07:05.738 03:58:19 -- accel/accel.sh@19 -- # read -r var val 00:07:05.738 03:58:19 -- accel/accel.sh@20 -- # val= 00:07:05.738 03:58:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.738 03:58:19 -- accel/accel.sh@19 -- # IFS=: 00:07:05.738 03:58:19 -- accel/accel.sh@19 -- # read -r var val 00:07:05.738 03:58:19 -- accel/accel.sh@20 -- # val= 00:07:05.738 03:58:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.738 03:58:19 -- accel/accel.sh@19 -- # IFS=: 00:07:05.738 03:58:19 -- accel/accel.sh@19 -- # read -r var val 00:07:05.738 03:58:19 -- accel/accel.sh@20 -- # val= 00:07:05.738 03:58:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.738 03:58:19 -- accel/accel.sh@19 -- # IFS=: 00:07:05.738 03:58:19 -- accel/accel.sh@19 -- # read -r var val 00:07:05.738 03:58:19 -- accel/accel.sh@20 -- # val= 00:07:05.738 03:58:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.738 03:58:19 -- accel/accel.sh@19 -- # IFS=: 00:07:05.738 03:58:19 -- accel/accel.sh@19 -- # read -r var val 00:07:05.738 03:58:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.738 03:58:19 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:05.738 03:58:19 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.738 00:07:05.738 real 0m1.346s 00:07:05.738 user 0m1.236s 00:07:05.738 sys 0m0.115s 00:07:05.738 03:58:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:05.738 03:58:19 -- common/autotest_common.sh@10 -- # set +x 00:07:05.738 ************************************ 00:07:05.738 END TEST accel_decomp 00:07:05.738 ************************************ 00:07:05.738 03:58:19 -- accel/accel.sh@118 -- # run_test 
accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:05.738 03:58:19 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:05.738 03:58:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.738 03:58:19 -- common/autotest_common.sh@10 -- # set +x 00:07:05.738 ************************************ 00:07:05.738 START TEST accel_decmop_full 00:07:05.738 ************************************ 00:07:05.738 03:58:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:05.738 03:58:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.738 03:58:19 -- accel/accel.sh@17 -- # local accel_module 00:07:05.738 03:58:19 -- accel/accel.sh@19 -- # IFS=: 00:07:05.738 03:58:19 -- accel/accel.sh@19 -- # read -r var val 00:07:05.738 03:58:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:05.738 03:58:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:05.738 03:58:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.738 03:58:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.738 03:58:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.738 03:58:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.738 03:58:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.738 03:58:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.738 03:58:19 -- accel/accel.sh@40 -- # local IFS=, 00:07:05.738 03:58:19 -- accel/accel.sh@41 -- # jq -r . 00:07:05.738 [2024-04-19 03:58:20.017715] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:07:05.738 [2024-04-19 03:58:20.017783] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149768 ] 00:07:05.738 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.738 [2024-04-19 03:58:20.074294] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.738 [2024-04-19 03:58:20.148119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.738 03:58:20 -- accel/accel.sh@20 -- # val= 00:07:05.738 03:58:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.738 03:58:20 -- accel/accel.sh@19 -- # IFS=: 00:07:05.738 03:58:20 -- accel/accel.sh@19 -- # read -r var val 00:07:05.739 03:58:20 -- accel/accel.sh@20 -- # val= 00:07:05.739 03:58:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # IFS=: 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # read -r var val 00:07:05.739 03:58:20 -- accel/accel.sh@20 -- # val= 00:07:05.739 03:58:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # IFS=: 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # read -r var val 00:07:05.739 03:58:20 -- accel/accel.sh@20 -- # val=0x1 00:07:05.739 03:58:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # IFS=: 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # read -r var val 00:07:05.739 03:58:20 -- accel/accel.sh@20 -- # val= 00:07:05.739 03:58:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # IFS=: 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # read -r var val 00:07:05.739 03:58:20 -- accel/accel.sh@20 -- # val= 00:07:05.739 03:58:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # IFS=: 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # read -r var val 00:07:05.739 03:58:20 -- accel/accel.sh@20 
-- # val=decompress 00:07:05.739 03:58:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.739 03:58:20 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # IFS=: 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # read -r var val 00:07:05.739 03:58:20 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:05.739 03:58:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # IFS=: 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # read -r var val 00:07:05.739 03:58:20 -- accel/accel.sh@20 -- # val= 00:07:05.739 03:58:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # IFS=: 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # read -r var val 00:07:05.739 03:58:20 -- accel/accel.sh@20 -- # val=software 00:07:05.739 03:58:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.739 03:58:20 -- accel/accel.sh@22 -- # accel_module=software 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # IFS=: 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # read -r var val 00:07:05.739 03:58:20 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:05.739 03:58:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # IFS=: 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # read -r var val 00:07:05.739 03:58:20 -- accel/accel.sh@20 -- # val=32 00:07:05.739 03:58:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # IFS=: 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # read -r var val 00:07:05.739 03:58:20 -- accel/accel.sh@20 -- # val=32 00:07:05.739 03:58:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # IFS=: 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # read -r var val 00:07:05.739 03:58:20 -- accel/accel.sh@20 -- # val=1 00:07:05.739 03:58:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # 
IFS=: 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # read -r var val 00:07:05.739 03:58:20 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.739 03:58:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # IFS=: 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # read -r var val 00:07:05.739 03:58:20 -- accel/accel.sh@20 -- # val=Yes 00:07:05.739 03:58:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # IFS=: 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # read -r var val 00:07:05.739 03:58:20 -- accel/accel.sh@20 -- # val= 00:07:05.739 03:58:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # IFS=: 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # read -r var val 00:07:05.739 03:58:20 -- accel/accel.sh@20 -- # val= 00:07:05.739 03:58:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # IFS=: 00:07:05.739 03:58:20 -- accel/accel.sh@19 -- # read -r var val 00:07:07.117 03:58:21 -- accel/accel.sh@20 -- # val= 00:07:07.117 03:58:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.117 03:58:21 -- accel/accel.sh@19 -- # IFS=: 00:07:07.117 03:58:21 -- accel/accel.sh@19 -- # read -r var val 00:07:07.117 03:58:21 -- accel/accel.sh@20 -- # val= 00:07:07.117 03:58:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.117 03:58:21 -- accel/accel.sh@19 -- # IFS=: 00:07:07.117 03:58:21 -- accel/accel.sh@19 -- # read -r var val 00:07:07.117 03:58:21 -- accel/accel.sh@20 -- # val= 00:07:07.117 03:58:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.117 03:58:21 -- accel/accel.sh@19 -- # IFS=: 00:07:07.117 03:58:21 -- accel/accel.sh@19 -- # read -r var val 00:07:07.117 03:58:21 -- accel/accel.sh@20 -- # val= 00:07:07.117 03:58:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.117 03:58:21 -- accel/accel.sh@19 -- # IFS=: 00:07:07.117 03:58:21 -- accel/accel.sh@19 -- # read -r var val 00:07:07.117 03:58:21 -- accel/accel.sh@20 
-- # val= 00:07:07.117 03:58:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.117 03:58:21 -- accel/accel.sh@19 -- # IFS=: 00:07:07.117 03:58:21 -- accel/accel.sh@19 -- # read -r var val 00:07:07.117 03:58:21 -- accel/accel.sh@20 -- # val= 00:07:07.117 03:58:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.117 03:58:21 -- accel/accel.sh@19 -- # IFS=: 00:07:07.117 03:58:21 -- accel/accel.sh@19 -- # read -r var val 00:07:07.117 03:58:21 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.117 03:58:21 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:07.117 03:58:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.117 00:07:07.117 real 0m1.355s 00:07:07.117 user 0m1.247s 00:07:07.117 sys 0m0.111s 00:07:07.117 03:58:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:07.117 03:58:21 -- common/autotest_common.sh@10 -- # set +x 00:07:07.117 ************************************ 00:07:07.117 END TEST accel_decmop_full 00:07:07.117 ************************************ 00:07:07.117 03:58:21 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:07.117 03:58:21 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:07.117 03:58:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.117 03:58:21 -- common/autotest_common.sh@10 -- # set +x 00:07:07.117 ************************************ 00:07:07.117 START TEST accel_decomp_mcore 00:07:07.117 ************************************ 00:07:07.117 03:58:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:07.117 03:58:21 -- accel/accel.sh@16 -- # local accel_opc 00:07:07.117 03:58:21 -- accel/accel.sh@17 -- # local accel_module 00:07:07.117 03:58:21 -- accel/accel.sh@19 -- # IFS=: 00:07:07.117 03:58:21 -- accel/accel.sh@19 -- # read -r var val 00:07:07.117 03:58:21 -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:07.117 03:58:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:07.117 03:58:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.117 03:58:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.117 03:58:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.117 03:58:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.117 03:58:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.117 03:58:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.117 03:58:21 -- accel/accel.sh@40 -- # local IFS=, 00:07:07.117 03:58:21 -- accel/accel.sh@41 -- # jq -r . 00:07:07.117 [2024-04-19 03:58:21.525117] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:07:07.117 [2024-04-19 03:58:21.525178] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150053 ] 00:07:07.117 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.117 [2024-04-19 03:58:21.582030] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:07.376 [2024-04-19 03:58:21.656788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.376 [2024-04-19 03:58:21.656804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.376 [2024-04-19 03:58:21.656889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.376 [2024-04-19 03:58:21.656891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.376 03:58:21 -- accel/accel.sh@20 -- # val= 00:07:07.376 03:58:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # 
IFS=: 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # read -r var val 00:07:07.376 03:58:21 -- accel/accel.sh@20 -- # val= 00:07:07.376 03:58:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # IFS=: 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # read -r var val 00:07:07.376 03:58:21 -- accel/accel.sh@20 -- # val= 00:07:07.376 03:58:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # IFS=: 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # read -r var val 00:07:07.376 03:58:21 -- accel/accel.sh@20 -- # val=0xf 00:07:07.376 03:58:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # IFS=: 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # read -r var val 00:07:07.376 03:58:21 -- accel/accel.sh@20 -- # val= 00:07:07.376 03:58:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # IFS=: 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # read -r var val 00:07:07.376 03:58:21 -- accel/accel.sh@20 -- # val= 00:07:07.376 03:58:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # IFS=: 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # read -r var val 00:07:07.376 03:58:21 -- accel/accel.sh@20 -- # val=decompress 00:07:07.376 03:58:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.376 03:58:21 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # IFS=: 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # read -r var val 00:07:07.376 03:58:21 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.376 03:58:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # IFS=: 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # read -r var val 00:07:07.376 03:58:21 -- accel/accel.sh@20 -- # val= 00:07:07.376 03:58:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # IFS=: 00:07:07.376 03:58:21 -- 
accel/accel.sh@19 -- # read -r var val 00:07:07.376 03:58:21 -- accel/accel.sh@20 -- # val=software 00:07:07.376 03:58:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.376 03:58:21 -- accel/accel.sh@22 -- # accel_module=software 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # IFS=: 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # read -r var val 00:07:07.376 03:58:21 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:07.376 03:58:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # IFS=: 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # read -r var val 00:07:07.376 03:58:21 -- accel/accel.sh@20 -- # val=32 00:07:07.376 03:58:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # IFS=: 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # read -r var val 00:07:07.376 03:58:21 -- accel/accel.sh@20 -- # val=32 00:07:07.376 03:58:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # IFS=: 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # read -r var val 00:07:07.376 03:58:21 -- accel/accel.sh@20 -- # val=1 00:07:07.376 03:58:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # IFS=: 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # read -r var val 00:07:07.376 03:58:21 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.376 03:58:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # IFS=: 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # read -r var val 00:07:07.376 03:58:21 -- accel/accel.sh@20 -- # val=Yes 00:07:07.376 03:58:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # IFS=: 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # read -r var val 00:07:07.376 03:58:21 -- accel/accel.sh@20 -- # val= 00:07:07.376 03:58:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # 
IFS=: 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # read -r var val 00:07:07.376 03:58:21 -- accel/accel.sh@20 -- # val= 00:07:07.376 03:58:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # IFS=: 00:07:07.376 03:58:21 -- accel/accel.sh@19 -- # read -r var val 00:07:08.754 03:58:22 -- accel/accel.sh@20 -- # val= 00:07:08.754 03:58:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.754 03:58:22 -- accel/accel.sh@19 -- # IFS=: 00:07:08.754 03:58:22 -- accel/accel.sh@19 -- # read -r var val 00:07:08.754 03:58:22 -- accel/accel.sh@20 -- # val= 00:07:08.754 03:58:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.754 03:58:22 -- accel/accel.sh@19 -- # IFS=: 00:07:08.754 03:58:22 -- accel/accel.sh@19 -- # read -r var val 00:07:08.754 03:58:22 -- accel/accel.sh@20 -- # val= 00:07:08.754 03:58:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.754 03:58:22 -- accel/accel.sh@19 -- # IFS=: 00:07:08.754 03:58:22 -- accel/accel.sh@19 -- # read -r var val 00:07:08.754 03:58:22 -- accel/accel.sh@20 -- # val= 00:07:08.754 03:58:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.754 03:58:22 -- accel/accel.sh@19 -- # IFS=: 00:07:08.754 03:58:22 -- accel/accel.sh@19 -- # read -r var val 00:07:08.754 03:58:22 -- accel/accel.sh@20 -- # val= 00:07:08.754 03:58:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.754 03:58:22 -- accel/accel.sh@19 -- # IFS=: 00:07:08.754 03:58:22 -- accel/accel.sh@19 -- # read -r var val 00:07:08.754 03:58:22 -- accel/accel.sh@20 -- # val= 00:07:08.754 03:58:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.754 03:58:22 -- accel/accel.sh@19 -- # IFS=: 00:07:08.754 03:58:22 -- accel/accel.sh@19 -- # read -r var val 00:07:08.754 03:58:22 -- accel/accel.sh@20 -- # val= 00:07:08.754 03:58:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.754 03:58:22 -- accel/accel.sh@19 -- # IFS=: 00:07:08.754 03:58:22 -- accel/accel.sh@19 -- # read -r var val 00:07:08.754 03:58:22 -- accel/accel.sh@20 -- # val= 
00:07:08.754 03:58:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.754 03:58:22 -- accel/accel.sh@19 -- # IFS=: 00:07:08.754 03:58:22 -- accel/accel.sh@19 -- # read -r var val 00:07:08.754 03:58:22 -- accel/accel.sh@20 -- # val= 00:07:08.754 03:58:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.754 03:58:22 -- accel/accel.sh@19 -- # IFS=: 00:07:08.754 03:58:22 -- accel/accel.sh@19 -- # read -r var val 00:07:08.754 03:58:22 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.754 03:58:22 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:08.754 03:58:22 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.754 00:07:08.754 real 0m1.363s 00:07:08.754 user 0m4.574s 00:07:08.754 sys 0m0.129s 00:07:08.754 03:58:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:08.754 03:58:22 -- common/autotest_common.sh@10 -- # set +x 00:07:08.754 ************************************ 00:07:08.754 END TEST accel_decomp_mcore 00:07:08.754 ************************************ 00:07:08.755 03:58:22 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:08.755 03:58:22 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:08.755 03:58:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.755 03:58:22 -- common/autotest_common.sh@10 -- # set +x 00:07:08.755 ************************************ 00:07:08.755 START TEST accel_decomp_full_mcore 00:07:08.755 ************************************ 00:07:08.755 03:58:23 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:08.755 03:58:23 -- accel/accel.sh@16 -- # local accel_opc 00:07:08.755 03:58:23 -- accel/accel.sh@17 -- # local accel_module 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # IFS=: 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # read -r var val 00:07:08.755 
03:58:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:08.755 03:58:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:08.755 03:58:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.755 03:58:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.755 03:58:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.755 03:58:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.755 03:58:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.755 03:58:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.755 03:58:23 -- accel/accel.sh@40 -- # local IFS=, 00:07:08.755 03:58:23 -- accel/accel.sh@41 -- # jq -r . 00:07:08.755 [2024-04-19 03:58:23.059393] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:07:08.755 [2024-04-19 03:58:23.059468] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150348 ] 00:07:08.755 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.755 [2024-04-19 03:58:23.113102] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:08.755 [2024-04-19 03:58:23.188997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.755 [2024-04-19 03:58:23.189091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.755 [2024-04-19 03:58:23.189184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.755 [2024-04-19 03:58:23.189186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.755 03:58:23 -- accel/accel.sh@20 -- # val= 00:07:08.755 03:58:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.755 03:58:23 -- 
accel/accel.sh@19 -- # IFS=: 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # read -r var val 00:07:08.755 03:58:23 -- accel/accel.sh@20 -- # val= 00:07:08.755 03:58:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # IFS=: 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # read -r var val 00:07:08.755 03:58:23 -- accel/accel.sh@20 -- # val= 00:07:08.755 03:58:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # IFS=: 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # read -r var val 00:07:08.755 03:58:23 -- accel/accel.sh@20 -- # val=0xf 00:07:08.755 03:58:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # IFS=: 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # read -r var val 00:07:08.755 03:58:23 -- accel/accel.sh@20 -- # val= 00:07:08.755 03:58:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # IFS=: 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # read -r var val 00:07:08.755 03:58:23 -- accel/accel.sh@20 -- # val= 00:07:08.755 03:58:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # IFS=: 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # read -r var val 00:07:08.755 03:58:23 -- accel/accel.sh@20 -- # val=decompress 00:07:08.755 03:58:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.755 03:58:23 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # IFS=: 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # read -r var val 00:07:08.755 03:58:23 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:08.755 03:58:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # IFS=: 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # read -r var val 00:07:08.755 03:58:23 -- accel/accel.sh@20 -- # val= 00:07:08.755 03:58:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # IFS=: 
00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # read -r var val 00:07:08.755 03:58:23 -- accel/accel.sh@20 -- # val=software 00:07:08.755 03:58:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.755 03:58:23 -- accel/accel.sh@22 -- # accel_module=software 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # IFS=: 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # read -r var val 00:07:08.755 03:58:23 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:08.755 03:58:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # IFS=: 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # read -r var val 00:07:08.755 03:58:23 -- accel/accel.sh@20 -- # val=32 00:07:08.755 03:58:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # IFS=: 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # read -r var val 00:07:08.755 03:58:23 -- accel/accel.sh@20 -- # val=32 00:07:08.755 03:58:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # IFS=: 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # read -r var val 00:07:08.755 03:58:23 -- accel/accel.sh@20 -- # val=1 00:07:08.755 03:58:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # IFS=: 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # read -r var val 00:07:08.755 03:58:23 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.755 03:58:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # IFS=: 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # read -r var val 00:07:08.755 03:58:23 -- accel/accel.sh@20 -- # val=Yes 00:07:08.755 03:58:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # IFS=: 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # read -r var val 00:07:08.755 03:58:23 -- accel/accel.sh@20 -- # val= 00:07:08.755 03:58:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.755 03:58:23 -- 
accel/accel.sh@19 -- # IFS=: 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # read -r var val 00:07:08.755 03:58:23 -- accel/accel.sh@20 -- # val= 00:07:08.755 03:58:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # IFS=: 00:07:08.755 03:58:23 -- accel/accel.sh@19 -- # read -r var val 00:07:10.134 03:58:24 -- accel/accel.sh@20 -- # val= 00:07:10.134 03:58:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.134 03:58:24 -- accel/accel.sh@19 -- # IFS=: 00:07:10.134 03:58:24 -- accel/accel.sh@19 -- # read -r var val 00:07:10.134 03:58:24 -- accel/accel.sh@20 -- # val= 00:07:10.134 03:58:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.134 03:58:24 -- accel/accel.sh@19 -- # IFS=: 00:07:10.134 03:58:24 -- accel/accel.sh@19 -- # read -r var val 00:07:10.134 03:58:24 -- accel/accel.sh@20 -- # val= 00:07:10.134 03:58:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.134 03:58:24 -- accel/accel.sh@19 -- # IFS=: 00:07:10.134 03:58:24 -- accel/accel.sh@19 -- # read -r var val 00:07:10.134 03:58:24 -- accel/accel.sh@20 -- # val= 00:07:10.134 03:58:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.134 03:58:24 -- accel/accel.sh@19 -- # IFS=: 00:07:10.134 03:58:24 -- accel/accel.sh@19 -- # read -r var val 00:07:10.134 03:58:24 -- accel/accel.sh@20 -- # val= 00:07:10.134 03:58:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.134 03:58:24 -- accel/accel.sh@19 -- # IFS=: 00:07:10.134 03:58:24 -- accel/accel.sh@19 -- # read -r var val 00:07:10.134 03:58:24 -- accel/accel.sh@20 -- # val= 00:07:10.134 03:58:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.134 03:58:24 -- accel/accel.sh@19 -- # IFS=: 00:07:10.134 03:58:24 -- accel/accel.sh@19 -- # read -r var val 00:07:10.134 03:58:24 -- accel/accel.sh@20 -- # val= 00:07:10.134 03:58:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.134 03:58:24 -- accel/accel.sh@19 -- # IFS=: 00:07:10.134 03:58:24 -- accel/accel.sh@19 -- # read -r var val 00:07:10.134 03:58:24 -- 
accel/accel.sh@20 -- # val= 00:07:10.134 03:58:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.134 03:58:24 -- accel/accel.sh@19 -- # IFS=: 00:07:10.134 03:58:24 -- accel/accel.sh@19 -- # read -r var val 00:07:10.134 03:58:24 -- accel/accel.sh@20 -- # val= 00:07:10.134 03:58:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.134 03:58:24 -- accel/accel.sh@19 -- # IFS=: 00:07:10.134 03:58:24 -- accel/accel.sh@19 -- # read -r var val 00:07:10.134 03:58:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.134 03:58:24 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:10.134 03:58:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.134 00:07:10.134 real 0m1.369s 00:07:10.134 user 0m4.607s 00:07:10.134 sys 0m0.124s 00:07:10.134 03:58:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:10.134 03:58:24 -- common/autotest_common.sh@10 -- # set +x 00:07:10.134 ************************************ 00:07:10.134 END TEST accel_decomp_full_mcore 00:07:10.134 ************************************ 00:07:10.134 03:58:24 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:10.134 03:58:24 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:10.134 03:58:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.134 03:58:24 -- common/autotest_common.sh@10 -- # set +x 00:07:10.134 ************************************ 00:07:10.134 START TEST accel_decomp_mthread 00:07:10.134 ************************************ 00:07:10.134 03:58:24 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:10.134 03:58:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.134 03:58:24 -- accel/accel.sh@17 -- # local accel_module 00:07:10.134 03:58:24 -- accel/accel.sh@19 -- # IFS=: 00:07:10.134 03:58:24 -- accel/accel.sh@19 -- # read -r var val 
00:07:10.134 03:58:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:10.134 03:58:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:10.134 03:58:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.134 03:58:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.134 03:58:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.134 03:58:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.134 03:58:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.134 03:58:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.134 03:58:24 -- accel/accel.sh@40 -- # local IFS=, 00:07:10.134 03:58:24 -- accel/accel.sh@41 -- # jq -r . 00:07:10.134 [2024-04-19 03:58:24.596882] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:07:10.134 [2024-04-19 03:58:24.596945] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150644 ] 00:07:10.134 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.134 [2024-04-19 03:58:24.649856] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.393 [2024-04-19 03:58:24.718054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.394 03:58:24 -- accel/accel.sh@20 -- # val= 00:07:10.394 03:58:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # IFS=: 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # read -r var val 00:07:10.394 03:58:24 -- accel/accel.sh@20 -- # val= 00:07:10.394 03:58:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # IFS=: 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # read -r 
var val 00:07:10.394 03:58:24 -- accel/accel.sh@20 -- # val= 00:07:10.394 03:58:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # IFS=: 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # read -r var val 00:07:10.394 03:58:24 -- accel/accel.sh@20 -- # val=0x1 00:07:10.394 03:58:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # IFS=: 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # read -r var val 00:07:10.394 03:58:24 -- accel/accel.sh@20 -- # val= 00:07:10.394 03:58:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # IFS=: 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # read -r var val 00:07:10.394 03:58:24 -- accel/accel.sh@20 -- # val= 00:07:10.394 03:58:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # IFS=: 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # read -r var val 00:07:10.394 03:58:24 -- accel/accel.sh@20 -- # val=decompress 00:07:10.394 03:58:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.394 03:58:24 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # IFS=: 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # read -r var val 00:07:10.394 03:58:24 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.394 03:58:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # IFS=: 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # read -r var val 00:07:10.394 03:58:24 -- accel/accel.sh@20 -- # val= 00:07:10.394 03:58:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # IFS=: 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # read -r var val 00:07:10.394 03:58:24 -- accel/accel.sh@20 -- # val=software 00:07:10.394 03:58:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.394 03:58:24 -- accel/accel.sh@22 -- # accel_module=software 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # IFS=: 
00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # read -r var val 00:07:10.394 03:58:24 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:10.394 03:58:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # IFS=: 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # read -r var val 00:07:10.394 03:58:24 -- accel/accel.sh@20 -- # val=32 00:07:10.394 03:58:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # IFS=: 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # read -r var val 00:07:10.394 03:58:24 -- accel/accel.sh@20 -- # val=32 00:07:10.394 03:58:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # IFS=: 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # read -r var val 00:07:10.394 03:58:24 -- accel/accel.sh@20 -- # val=2 00:07:10.394 03:58:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # IFS=: 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # read -r var val 00:07:10.394 03:58:24 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.394 03:58:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # IFS=: 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # read -r var val 00:07:10.394 03:58:24 -- accel/accel.sh@20 -- # val=Yes 00:07:10.394 03:58:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # IFS=: 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # read -r var val 00:07:10.394 03:58:24 -- accel/accel.sh@20 -- # val= 00:07:10.394 03:58:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # IFS=: 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # read -r var val 00:07:10.394 03:58:24 -- accel/accel.sh@20 -- # val= 00:07:10.394 03:58:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # IFS=: 00:07:10.394 03:58:24 -- accel/accel.sh@19 -- # 
read -r var val 00:07:11.773 03:58:25 -- accel/accel.sh@20 -- # val= 00:07:11.773 03:58:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.773 03:58:25 -- accel/accel.sh@19 -- # IFS=: 00:07:11.773 03:58:25 -- accel/accel.sh@19 -- # read -r var val 00:07:11.773 03:58:25 -- accel/accel.sh@20 -- # val= 00:07:11.773 03:58:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.773 03:58:25 -- accel/accel.sh@19 -- # IFS=: 00:07:11.773 03:58:25 -- accel/accel.sh@19 -- # read -r var val 00:07:11.773 03:58:25 -- accel/accel.sh@20 -- # val= 00:07:11.773 03:58:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.773 03:58:25 -- accel/accel.sh@19 -- # IFS=: 00:07:11.773 03:58:25 -- accel/accel.sh@19 -- # read -r var val 00:07:11.773 03:58:25 -- accel/accel.sh@20 -- # val= 00:07:11.773 03:58:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.773 03:58:25 -- accel/accel.sh@19 -- # IFS=: 00:07:11.773 03:58:25 -- accel/accel.sh@19 -- # read -r var val 00:07:11.773 03:58:25 -- accel/accel.sh@20 -- # val= 00:07:11.773 03:58:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.773 03:58:25 -- accel/accel.sh@19 -- # IFS=: 00:07:11.773 03:58:25 -- accel/accel.sh@19 -- # read -r var val 00:07:11.773 03:58:25 -- accel/accel.sh@20 -- # val= 00:07:11.773 03:58:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.773 03:58:25 -- accel/accel.sh@19 -- # IFS=: 00:07:11.773 03:58:25 -- accel/accel.sh@19 -- # read -r var val 00:07:11.773 03:58:25 -- accel/accel.sh@20 -- # val= 00:07:11.773 03:58:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.773 03:58:25 -- accel/accel.sh@19 -- # IFS=: 00:07:11.773 03:58:25 -- accel/accel.sh@19 -- # read -r var val 00:07:11.773 03:58:25 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.773 03:58:25 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:11.773 03:58:25 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.773 00:07:11.773 real 0m1.341s 00:07:11.773 user 0m1.240s 00:07:11.773 sys 0m0.113s 00:07:11.773 03:58:25 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:07:11.773 03:58:25 -- common/autotest_common.sh@10 -- # set +x 00:07:11.773 ************************************ 00:07:11.773 END TEST accel_decomp_mthread 00:07:11.773 ************************************ 00:07:11.773 03:58:25 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:11.773 03:58:25 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:11.773 03:58:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.773 03:58:25 -- common/autotest_common.sh@10 -- # set +x 00:07:11.773 ************************************ 00:07:11.773 START TEST accel_deomp_full_mthread 00:07:11.773 ************************************ 00:07:11.773 03:58:26 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:11.773 03:58:26 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.773 03:58:26 -- accel/accel.sh@17 -- # local accel_module 00:07:11.773 03:58:26 -- accel/accel.sh@19 -- # IFS=: 00:07:11.773 03:58:26 -- accel/accel.sh@19 -- # read -r var val 00:07:11.773 03:58:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:11.773 03:58:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:11.773 03:58:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.773 03:58:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.773 03:58:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.773 03:58:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.773 03:58:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.773 03:58:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 
00:07:11.773 03:58:26 -- accel/accel.sh@40 -- # local IFS=, 00:07:11.773 03:58:26 -- accel/accel.sh@41 -- # jq -r . 00:07:11.773 [2024-04-19 03:58:26.085301] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:07:11.773 [2024-04-19 03:58:26.085346] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150954 ] 00:07:11.773 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.773 [2024-04-19 03:58:26.136396] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.773 [2024-04-19 03:58:26.203334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.773 03:58:26 -- accel/accel.sh@20 -- # val= 00:07:11.773 03:58:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # IFS=: 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # read -r var val 00:07:11.774 03:58:26 -- accel/accel.sh@20 -- # val= 00:07:11.774 03:58:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # IFS=: 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # read -r var val 00:07:11.774 03:58:26 -- accel/accel.sh@20 -- # val= 00:07:11.774 03:58:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # IFS=: 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # read -r var val 00:07:11.774 03:58:26 -- accel/accel.sh@20 -- # val=0x1 00:07:11.774 03:58:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # IFS=: 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # read -r var val 00:07:11.774 03:58:26 -- accel/accel.sh@20 -- # val= 00:07:11.774 03:58:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # IFS=: 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # read -r var val 00:07:11.774 03:58:26 -- accel/accel.sh@20 
-- # val= 00:07:11.774 03:58:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # IFS=: 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # read -r var val 00:07:11.774 03:58:26 -- accel/accel.sh@20 -- # val=decompress 00:07:11.774 03:58:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.774 03:58:26 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # IFS=: 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # read -r var val 00:07:11.774 03:58:26 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:11.774 03:58:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # IFS=: 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # read -r var val 00:07:11.774 03:58:26 -- accel/accel.sh@20 -- # val= 00:07:11.774 03:58:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # IFS=: 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # read -r var val 00:07:11.774 03:58:26 -- accel/accel.sh@20 -- # val=software 00:07:11.774 03:58:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.774 03:58:26 -- accel/accel.sh@22 -- # accel_module=software 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # IFS=: 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # read -r var val 00:07:11.774 03:58:26 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:11.774 03:58:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # IFS=: 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # read -r var val 00:07:11.774 03:58:26 -- accel/accel.sh@20 -- # val=32 00:07:11.774 03:58:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # IFS=: 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # read -r var val 00:07:11.774 03:58:26 -- accel/accel.sh@20 -- # val=32 00:07:11.774 03:58:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # IFS=: 
00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # read -r var val 00:07:11.774 03:58:26 -- accel/accel.sh@20 -- # val=2 00:07:11.774 03:58:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # IFS=: 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # read -r var val 00:07:11.774 03:58:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.774 03:58:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # IFS=: 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # read -r var val 00:07:11.774 03:58:26 -- accel/accel.sh@20 -- # val=Yes 00:07:11.774 03:58:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # IFS=: 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # read -r var val 00:07:11.774 03:58:26 -- accel/accel.sh@20 -- # val= 00:07:11.774 03:58:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # IFS=: 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # read -r var val 00:07:11.774 03:58:26 -- accel/accel.sh@20 -- # val= 00:07:11.774 03:58:26 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # IFS=: 00:07:11.774 03:58:26 -- accel/accel.sh@19 -- # read -r var val 00:07:13.154 03:58:27 -- accel/accel.sh@20 -- # val= 00:07:13.154 03:58:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.154 03:58:27 -- accel/accel.sh@19 -- # IFS=: 00:07:13.154 03:58:27 -- accel/accel.sh@19 -- # read -r var val 00:07:13.154 03:58:27 -- accel/accel.sh@20 -- # val= 00:07:13.154 03:58:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.154 03:58:27 -- accel/accel.sh@19 -- # IFS=: 00:07:13.154 03:58:27 -- accel/accel.sh@19 -- # read -r var val 00:07:13.154 03:58:27 -- accel/accel.sh@20 -- # val= 00:07:13.154 03:58:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.154 03:58:27 -- accel/accel.sh@19 -- # IFS=: 00:07:13.154 03:58:27 -- accel/accel.sh@19 -- # read -r var val 00:07:13.154 03:58:27 -- accel/accel.sh@20 -- # 
val= 00:07:13.154 03:58:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.154 03:58:27 -- accel/accel.sh@19 -- # IFS=: 00:07:13.154 03:58:27 -- accel/accel.sh@19 -- # read -r var val 00:07:13.154 03:58:27 -- accel/accel.sh@20 -- # val= 00:07:13.154 03:58:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.154 03:58:27 -- accel/accel.sh@19 -- # IFS=: 00:07:13.154 03:58:27 -- accel/accel.sh@19 -- # read -r var val 00:07:13.154 03:58:27 -- accel/accel.sh@20 -- # val= 00:07:13.154 03:58:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.154 03:58:27 -- accel/accel.sh@19 -- # IFS=: 00:07:13.154 03:58:27 -- accel/accel.sh@19 -- # read -r var val 00:07:13.154 03:58:27 -- accel/accel.sh@20 -- # val= 00:07:13.154 03:58:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.154 03:58:27 -- accel/accel.sh@19 -- # IFS=: 00:07:13.154 03:58:27 -- accel/accel.sh@19 -- # read -r var val 00:07:13.154 03:58:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.154 03:58:27 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:13.154 03:58:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.154 00:07:13.154 real 0m1.353s 00:07:13.154 user 0m1.243s 00:07:13.154 sys 0m0.112s 00:07:13.154 03:58:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:13.154 03:58:27 -- common/autotest_common.sh@10 -- # set +x 00:07:13.154 ************************************ 00:07:13.154 END TEST accel_deomp_full_mthread 00:07:13.154 ************************************ 00:07:13.154 03:58:27 -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:13.154 03:58:27 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:13.154 03:58:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:13.154 03:58:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.154 03:58:27 -- common/autotest_common.sh@10 -- # set +x 00:07:13.154 03:58:27 -- accel/accel.sh@137 -- # build_accel_config 
00:07:13.154 03:58:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.154 03:58:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.154 03:58:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.154 03:58:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.154 03:58:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.154 03:58:27 -- accel/accel.sh@40 -- # local IFS=, 00:07:13.154 03:58:27 -- accel/accel.sh@41 -- # jq -r . 00:07:13.154 ************************************ 00:07:13.154 START TEST accel_dif_functional_tests 00:07:13.154 ************************************ 00:07:13.154 03:58:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:13.154 [2024-04-19 03:58:27.603017] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:07:13.154 [2024-04-19 03:58:27.603050] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151283 ] 00:07:13.154 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.154 [2024-04-19 03:58:27.652231] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.413 [2024-04-19 03:58:27.720650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.413 [2024-04-19 03:58:27.720745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.413 [2024-04-19 03:58:27.720745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.413 00:07:13.413 00:07:13.413 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.413 http://cunit.sourceforge.net/ 00:07:13.413 00:07:13.413 00:07:13.413 Suite: accel_dif 00:07:13.413 Test: verify: DIF generated, GUARD check ...passed 00:07:13.413 Test: verify: DIF generated, APPTAG check ...passed 00:07:13.413 Test: verify: DIF generated, REFTAG check ...passed 00:07:13.413 Test: 
verify: DIF not generated, GUARD check ...[2024-04-19 03:58:27.787466] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:13.414 [2024-04-19 03:58:27.787506] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:13.414 passed 00:07:13.414 Test: verify: DIF not generated, APPTAG check ...[2024-04-19 03:58:27.787534] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:13.414 [2024-04-19 03:58:27.787547] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:13.414 passed 00:07:13.414 Test: verify: DIF not generated, REFTAG check ...[2024-04-19 03:58:27.787564] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:13.414 [2024-04-19 03:58:27.787580] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:13.414 passed 00:07:13.414 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:13.414 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-19 03:58:27.787617] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:13.414 passed 00:07:13.414 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:13.414 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:13.414 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:13.414 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-19 03:58:27.787707] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:13.414 passed 00:07:13.414 Test: generate copy: DIF generated, GUARD check ...passed 00:07:13.414 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:13.414 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:13.414 Test: generate copy: DIF generated, no GUARD check flag set 
...passed 00:07:13.414 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:13.414 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:13.414 Test: generate copy: iovecs-len validate ...[2024-04-19 03:58:27.787857] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:13.414 passed 00:07:13.414 Test: generate copy: buffer alignment validate ...passed 00:07:13.414 00:07:13.414 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.414 suites 1 1 n/a 0 0 00:07:13.414 tests 20 20 20 0 0 00:07:13.414 asserts 204 204 204 0 n/a 00:07:13.414 00:07:13.414 Elapsed time = 0.002 seconds 00:07:13.674 00:07:13.674 real 0m0.407s 00:07:13.674 user 0m0.585s 00:07:13.674 sys 0m0.132s 00:07:13.674 03:58:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:13.674 03:58:27 -- common/autotest_common.sh@10 -- # set +x 00:07:13.674 ************************************ 00:07:13.674 END TEST accel_dif_functional_tests 00:07:13.674 ************************************ 00:07:13.674 00:07:13.674 real 0m33.147s 00:07:13.674 user 0m35.395s 00:07:13.674 sys 0m5.128s 00:07:13.674 03:58:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:13.674 03:58:27 -- common/autotest_common.sh@10 -- # set +x 00:07:13.674 ************************************ 00:07:13.674 END TEST accel 00:07:13.674 ************************************ 00:07:13.674 03:58:28 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:13.674 03:58:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:13.674 03:58:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.674 03:58:28 -- common/autotest_common.sh@10 -- # set +x 00:07:13.674 ************************************ 00:07:13.674 START TEST accel_rpc 00:07:13.674 ************************************ 00:07:13.674 03:58:28 -- 
common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:13.933 * Looking for test storage... 00:07:13.933 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:13.933 03:58:28 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:13.933 03:58:28 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=151542 00:07:13.933 03:58:28 -- accel/accel_rpc.sh@15 -- # waitforlisten 151542 00:07:13.933 03:58:28 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:13.933 03:58:28 -- common/autotest_common.sh@817 -- # '[' -z 151542 ']' 00:07:13.933 03:58:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.933 03:58:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:13.933 03:58:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.933 03:58:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:13.933 03:58:28 -- common/autotest_common.sh@10 -- # set +x 00:07:13.933 [2024-04-19 03:58:28.292664] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:07:13.933 [2024-04-19 03:58:28.292709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151542 ] 00:07:13.933 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.933 [2024-04-19 03:58:28.342454] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.933 [2024-04-19 03:58:28.415804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.871 03:58:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:14.871 03:58:29 -- common/autotest_common.sh@850 -- # return 0 00:07:14.871 03:58:29 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:14.871 03:58:29 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:14.871 03:58:29 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:14.871 03:58:29 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:14.871 03:58:29 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:14.871 03:58:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:14.871 03:58:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.871 03:58:29 -- common/autotest_common.sh@10 -- # set +x 00:07:14.871 ************************************ 00:07:14.871 START TEST accel_assign_opcode 00:07:14.871 ************************************ 00:07:14.871 03:58:29 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:07:14.871 03:58:29 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:14.871 03:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.871 03:58:29 -- common/autotest_common.sh@10 -- # set +x 00:07:14.871 [2024-04-19 03:58:29.198015] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:14.871 03:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.871 03:58:29 -- 
accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:14.871 03:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.871 03:58:29 -- common/autotest_common.sh@10 -- # set +x 00:07:14.871 [2024-04-19 03:58:29.206024] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:14.871 03:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.871 03:58:29 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:14.871 03:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.871 03:58:29 -- common/autotest_common.sh@10 -- # set +x 00:07:14.871 03:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.871 03:58:29 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:14.871 03:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.871 03:58:29 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:14.871 03:58:29 -- common/autotest_common.sh@10 -- # set +x 00:07:14.871 03:58:29 -- accel/accel_rpc.sh@42 -- # grep software 00:07:15.130 03:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:15.130 software 00:07:15.130 00:07:15.130 real 0m0.238s 00:07:15.130 user 0m0.045s 00:07:15.130 sys 0m0.010s 00:07:15.130 03:58:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:15.130 03:58:29 -- common/autotest_common.sh@10 -- # set +x 00:07:15.130 ************************************ 00:07:15.130 END TEST accel_assign_opcode 00:07:15.130 ************************************ 00:07:15.130 03:58:29 -- accel/accel_rpc.sh@55 -- # killprocess 151542 00:07:15.130 03:58:29 -- common/autotest_common.sh@936 -- # '[' -z 151542 ']' 00:07:15.130 03:58:29 -- common/autotest_common.sh@940 -- # kill -0 151542 00:07:15.130 03:58:29 -- common/autotest_common.sh@941 -- # uname 00:07:15.130 03:58:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:15.130 03:58:29 -- common/autotest_common.sh@942 -- # ps --no-headers 
-o comm= 151542 00:07:15.130 03:58:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:15.130 03:58:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:15.130 03:58:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 151542' 00:07:15.130 killing process with pid 151542 00:07:15.130 03:58:29 -- common/autotest_common.sh@955 -- # kill 151542 00:07:15.130 03:58:29 -- common/autotest_common.sh@960 -- # wait 151542 00:07:15.389 00:07:15.389 real 0m1.677s 00:07:15.389 user 0m1.747s 00:07:15.389 sys 0m0.471s 00:07:15.389 03:58:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:15.389 03:58:29 -- common/autotest_common.sh@10 -- # set +x 00:07:15.389 ************************************ 00:07:15.389 END TEST accel_rpc 00:07:15.389 ************************************ 00:07:15.389 03:58:29 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:15.389 03:58:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:15.389 03:58:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.389 03:58:29 -- common/autotest_common.sh@10 -- # set +x 00:07:15.648 ************************************ 00:07:15.648 START TEST app_cmdline 00:07:15.648 ************************************ 00:07:15.648 03:58:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:15.648 * Looking for test storage... 
00:07:15.648 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:15.648 03:58:30 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:15.648 03:58:30 -- app/cmdline.sh@17 -- # spdk_tgt_pid=151891 00:07:15.648 03:58:30 -- app/cmdline.sh@18 -- # waitforlisten 151891 00:07:15.648 03:58:30 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:15.648 03:58:30 -- common/autotest_common.sh@817 -- # '[' -z 151891 ']' 00:07:15.648 03:58:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.648 03:58:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:15.648 03:58:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.648 03:58:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:15.648 03:58:30 -- common/autotest_common.sh@10 -- # set +x 00:07:15.648 [2024-04-19 03:58:30.128979] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:07:15.648 [2024-04-19 03:58:30.129025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151891 ] 00:07:15.648 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.082 [2024-04-19 03:58:30.179576] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.082 [2024-04-19 03:58:30.251775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.650 03:58:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:16.650 03:58:30 -- common/autotest_common.sh@850 -- # return 0 00:07:16.650 03:58:30 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:16.650 { 00:07:16.650 "version": "SPDK v24.05-pre git sha1 77a84e60e", 00:07:16.650 "fields": { 00:07:16.650 "major": 24, 00:07:16.650 "minor": 5, 00:07:16.650 "patch": 0, 00:07:16.650 "suffix": "-pre", 00:07:16.650 "commit": "77a84e60e" 00:07:16.650 } 00:07:16.650 } 00:07:16.650 03:58:31 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:16.650 03:58:31 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:16.650 03:58:31 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:16.650 03:58:31 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:16.650 03:58:31 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:16.650 03:58:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:16.650 03:58:31 -- common/autotest_common.sh@10 -- # set +x 00:07:16.650 03:58:31 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:16.650 03:58:31 -- app/cmdline.sh@26 -- # sort 00:07:16.650 03:58:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:16.650 03:58:31 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:16.650 03:58:31 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:16.650 03:58:31 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:16.650 03:58:31 -- common/autotest_common.sh@638 -- # local es=0 00:07:16.650 03:58:31 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:16.650 03:58:31 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:16.650 03:58:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:16.650 03:58:31 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:16.650 03:58:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:16.650 03:58:31 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:16.650 03:58:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:16.650 03:58:31 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:16.650 03:58:31 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:16.650 03:58:31 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:16.910 request: 00:07:16.910 { 00:07:16.910 "method": "env_dpdk_get_mem_stats", 00:07:16.910 "req_id": 1 00:07:16.910 } 00:07:16.910 Got JSON-RPC error response 00:07:16.910 response: 00:07:16.910 { 00:07:16.910 "code": -32601, 00:07:16.910 "message": "Method not found" 00:07:16.910 } 00:07:16.910 03:58:31 -- common/autotest_common.sh@641 -- # es=1 00:07:16.910 03:58:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:16.910 03:58:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 
00:07:16.910 03:58:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:16.910 03:58:31 -- app/cmdline.sh@1 -- # killprocess 151891 00:07:16.910 03:58:31 -- common/autotest_common.sh@936 -- # '[' -z 151891 ']' 00:07:16.910 03:58:31 -- common/autotest_common.sh@940 -- # kill -0 151891 00:07:16.910 03:58:31 -- common/autotest_common.sh@941 -- # uname 00:07:16.910 03:58:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:16.910 03:58:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 151891 00:07:16.910 03:58:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:16.910 03:58:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:16.910 03:58:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 151891' 00:07:16.910 killing process with pid 151891 00:07:16.910 03:58:31 -- common/autotest_common.sh@955 -- # kill 151891 00:07:16.910 03:58:31 -- common/autotest_common.sh@960 -- # wait 151891 00:07:17.169 00:07:17.170 real 0m1.609s 00:07:17.170 user 0m1.885s 00:07:17.170 sys 0m0.397s 00:07:17.170 03:58:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:17.170 03:58:31 -- common/autotest_common.sh@10 -- # set +x 00:07:17.170 ************************************ 00:07:17.170 END TEST app_cmdline 00:07:17.170 ************************************ 00:07:17.170 03:58:31 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:17.170 03:58:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:17.170 03:58:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.170 03:58:31 -- common/autotest_common.sh@10 -- # set +x 00:07:17.429 ************************************ 00:07:17.429 START TEST version 00:07:17.429 ************************************ 00:07:17.429 03:58:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:17.429 * Looking for 
test storage... 00:07:17.429 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:17.429 03:58:31 -- app/version.sh@17 -- # get_header_version major 00:07:17.429 03:58:31 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:17.429 03:58:31 -- app/version.sh@14 -- # tr -d '"' 00:07:17.429 03:58:31 -- app/version.sh@14 -- # cut -f2 00:07:17.429 03:58:31 -- app/version.sh@17 -- # major=24 00:07:17.429 03:58:31 -- app/version.sh@18 -- # get_header_version minor 00:07:17.429 03:58:31 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:17.429 03:58:31 -- app/version.sh@14 -- # cut -f2 00:07:17.429 03:58:31 -- app/version.sh@14 -- # tr -d '"' 00:07:17.429 03:58:31 -- app/version.sh@18 -- # minor=5 00:07:17.429 03:58:31 -- app/version.sh@19 -- # get_header_version patch 00:07:17.429 03:58:31 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:17.429 03:58:31 -- app/version.sh@14 -- # cut -f2 00:07:17.429 03:58:31 -- app/version.sh@14 -- # tr -d '"' 00:07:17.429 03:58:31 -- app/version.sh@19 -- # patch=0 00:07:17.429 03:58:31 -- app/version.sh@20 -- # get_header_version suffix 00:07:17.429 03:58:31 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:17.429 03:58:31 -- app/version.sh@14 -- # cut -f2 00:07:17.429 03:58:31 -- app/version.sh@14 -- # tr -d '"' 00:07:17.429 03:58:31 -- app/version.sh@20 -- # suffix=-pre 00:07:17.429 03:58:31 -- app/version.sh@22 -- # version=24.5 00:07:17.429 03:58:31 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:17.429 03:58:31 -- app/version.sh@28 -- # version=24.5rc0 00:07:17.429 03:58:31 -- app/version.sh@30 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:17.429 03:58:31 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:17.429 03:58:31 -- app/version.sh@30 -- # py_version=24.5rc0 00:07:17.429 03:58:31 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:07:17.429 00:07:17.429 real 0m0.158s 00:07:17.429 user 0m0.073s 00:07:17.429 sys 0m0.120s 00:07:17.429 03:58:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:17.429 03:58:31 -- common/autotest_common.sh@10 -- # set +x 00:07:17.429 ************************************ 00:07:17.429 END TEST version 00:07:17.429 ************************************ 00:07:17.429 03:58:31 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:07:17.429 03:58:31 -- spdk/autotest.sh@194 -- # uname -s 00:07:17.689 03:58:31 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:17.689 03:58:31 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:17.689 03:58:31 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:17.689 03:58:31 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:17.689 03:58:31 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:07:17.689 03:58:31 -- spdk/autotest.sh@258 -- # timing_exit lib 00:07:17.689 03:58:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:17.689 03:58:31 -- common/autotest_common.sh@10 -- # set +x 00:07:17.689 03:58:31 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:17.689 03:58:31 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:07:17.689 03:58:31 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:07:17.689 03:58:31 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:07:17.689 03:58:31 -- spdk/autotest.sh@281 -- # '[' rdma = rdma ']' 00:07:17.689 03:58:31 -- spdk/autotest.sh@282 -- # run_test nvmf_rdma 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:17.689 03:58:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:17.689 03:58:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.689 03:58:31 -- common/autotest_common.sh@10 -- # set +x 00:07:17.689 ************************************ 00:07:17.689 START TEST nvmf_rdma 00:07:17.689 ************************************ 00:07:17.689 03:58:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:17.689 * Looking for test storage... 00:07:17.689 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:17.689 03:58:32 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:17.689 03:58:32 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:17.689 03:58:32 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:17.689 03:58:32 -- nvmf/common.sh@7 -- # uname -s 00:07:17.689 03:58:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.689 03:58:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.689 03:58:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.689 03:58:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.689 03:58:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.689 03:58:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.689 03:58:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.949 03:58:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.949 03:58:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.949 03:58:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.949 03:58:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:07:17.949 03:58:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:07:17.949 03:58:32 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:17.949 03:58:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.949 03:58:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:17.949 03:58:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:17.949 03:58:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:17.949 03:58:32 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.949 03:58:32 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.949 03:58:32 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.949 03:58:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.949 03:58:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.949 03:58:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:17.949 03:58:32 -- paths/export.sh@5 -- # export PATH 00:07:17.949 03:58:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.949 03:58:32 -- nvmf/common.sh@47 -- # : 0 00:07:17.949 03:58:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:17.949 03:58:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:17.949 03:58:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:17.949 03:58:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.949 03:58:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.949 03:58:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:17.949 03:58:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:17.950 03:58:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:17.950 03:58:32 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:17.950 03:58:32 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:17.950 03:58:32 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:17.950 03:58:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:17.950 03:58:32 -- common/autotest_common.sh@10 -- # set +x 00:07:17.950 03:58:32 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:17.950 03:58:32 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:17.950 03:58:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:17.950 03:58:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.950 03:58:32 -- common/autotest_common.sh@10 -- # set +x 00:07:17.950 ************************************ 00:07:17.950 START TEST 
nvmf_example 00:07:17.950 ************************************ 00:07:17.950 03:58:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:17.950 * Looking for test storage... 00:07:17.950 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:17.950 03:58:32 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:17.950 03:58:32 -- nvmf/common.sh@7 -- # uname -s 00:07:17.950 03:58:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.950 03:58:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.950 03:58:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.950 03:58:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.950 03:58:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.950 03:58:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.950 03:58:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.950 03:58:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.950 03:58:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.950 03:58:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.950 03:58:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:07:18.221 03:58:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:07:18.221 03:58:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.221 03:58:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.221 03:58:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.221 03:58:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.221 03:58:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:18.221 03:58:32 -- scripts/common.sh@502 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:07:18.221 03:58:32 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.221 03:58:32 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.221 03:58:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.221 03:58:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.221 03:58:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.221 03:58:32 -- paths/export.sh@5 -- # export PATH 00:07:18.221 
03:58:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.221 03:58:32 -- nvmf/common.sh@47 -- # : 0 00:07:18.221 03:58:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:18.221 03:58:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:18.221 03:58:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.221 03:58:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.221 03:58:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.221 03:58:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:18.221 03:58:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:18.221 03:58:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:18.221 03:58:32 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:18.221 03:58:32 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:18.221 03:58:32 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:18.221 03:58:32 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:18.221 03:58:32 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:18.221 03:58:32 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:18.221 03:58:32 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:18.221 03:58:32 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:18.221 03:58:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:18.221 03:58:32 -- common/autotest_common.sh@10 
-- # set +x 00:07:18.221 03:58:32 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:18.221 03:58:32 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:07:18.221 03:58:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.221 03:58:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:18.221 03:58:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:18.221 03:58:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:18.221 03:58:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.221 03:58:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:18.221 03:58:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.221 03:58:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:18.221 03:58:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:18.221 03:58:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:18.221 03:58:32 -- common/autotest_common.sh@10 -- # set +x 00:07:23.503 03:58:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:23.503 03:58:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:23.503 03:58:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:23.503 03:58:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:23.503 03:58:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:23.503 03:58:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:23.503 03:58:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:23.503 03:58:37 -- nvmf/common.sh@295 -- # net_devs=() 00:07:23.503 03:58:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:23.503 03:58:37 -- nvmf/common.sh@296 -- # e810=() 00:07:23.503 03:58:37 -- nvmf/common.sh@296 -- # local -ga e810 00:07:23.503 03:58:37 -- nvmf/common.sh@297 -- # x722=() 00:07:23.503 03:58:37 -- nvmf/common.sh@297 -- # local -ga x722 00:07:23.503 03:58:37 -- nvmf/common.sh@298 -- # mlx=() 00:07:23.503 03:58:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:23.503 03:58:37 -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:23.503 03:58:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:23.503 03:58:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:23.503 03:58:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:23.503 03:58:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:23.503 03:58:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:23.503 03:58:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:23.503 03:58:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:23.503 03:58:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:23.503 03:58:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:23.503 03:58:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:23.503 03:58:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:23.503 03:58:37 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:23.503 03:58:37 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:23.503 03:58:37 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:23.503 03:58:37 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:23.503 03:58:37 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:23.503 03:58:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:23.503 03:58:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:23.503 03:58:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:07:23.503 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:07:23.503 03:58:37 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:23.503 03:58:37 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:23.503 03:58:37 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:23.503 03:58:37 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:23.503 
03:58:37 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:23.503 03:58:37 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:23.503 03:58:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:23.503 03:58:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:07:23.503 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:07:23.503 03:58:37 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:23.503 03:58:37 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:23.503 03:58:37 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:23.503 03:58:37 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:23.503 03:58:37 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:23.503 03:58:37 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:23.503 03:58:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:23.503 03:58:37 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:23.503 03:58:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:23.503 03:58:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.503 03:58:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:23.503 03:58:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.503 03:58:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:23.503 Found net devices under 0000:18:00.0: mlx_0_0 00:07:23.503 03:58:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.503 03:58:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:23.503 03:58:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.503 03:58:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:23.503 03:58:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.503 03:58:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:23.503 Found net devices under 0000:18:00.1: 
mlx_0_1 00:07:23.503 03:58:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.503 03:58:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:23.503 03:58:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:23.503 03:58:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:23.503 03:58:37 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:07:23.503 03:58:37 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:07:23.503 03:58:37 -- nvmf/common.sh@409 -- # rdma_device_init 00:07:23.503 03:58:37 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:07:23.503 03:58:37 -- nvmf/common.sh@58 -- # uname 00:07:23.504 03:58:37 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:23.504 03:58:37 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:23.504 03:58:37 -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:23.504 03:58:37 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:23.504 03:58:37 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:23.504 03:58:37 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:23.504 03:58:37 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:23.504 03:58:37 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:23.504 03:58:37 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:07:23.504 03:58:37 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:23.504 03:58:37 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:23.504 03:58:37 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:23.504 03:58:37 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:23.504 03:58:37 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:23.504 03:58:37 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:23.504 03:58:37 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:23.504 03:58:37 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:23.504 03:58:37 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:23.504 03:58:37 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 
]] 00:07:23.504 03:58:37 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:23.504 03:58:37 -- nvmf/common.sh@105 -- # continue 2 00:07:23.504 03:58:37 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:23.504 03:58:37 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:23.504 03:58:37 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:23.504 03:58:37 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:23.504 03:58:37 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:23.504 03:58:37 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:23.504 03:58:37 -- nvmf/common.sh@105 -- # continue 2 00:07:23.504 03:58:37 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:23.504 03:58:37 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:23.504 03:58:37 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:23.504 03:58:37 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:23.504 03:58:37 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:23.504 03:58:37 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:23.504 03:58:37 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:23.504 03:58:37 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:23.504 03:58:37 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:23.504 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:23.504 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:07:23.504 altname enp24s0f0np0 00:07:23.504 altname ens785f0np0 00:07:23.504 inet 192.168.100.8/24 scope global mlx_0_0 00:07:23.504 valid_lft forever preferred_lft forever 00:07:23.504 03:58:37 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:23.504 03:58:37 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:23.504 03:58:37 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:23.504 03:58:37 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:23.504 03:58:37 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:23.504 
03:58:37 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:23.504 03:58:37 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:23.504 03:58:37 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:23.504 03:58:37 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:23.504 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:23.504 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:07:23.504 altname enp24s0f1np1 00:07:23.504 altname ens785f1np1 00:07:23.504 inet 192.168.100.9/24 scope global mlx_0_1 00:07:23.504 valid_lft forever preferred_lft forever 00:07:23.504 03:58:37 -- nvmf/common.sh@411 -- # return 0 00:07:23.504 03:58:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:23.504 03:58:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:23.504 03:58:37 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:07:23.504 03:58:37 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:07:23.504 03:58:37 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:23.504 03:58:37 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:23.504 03:58:37 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:23.504 03:58:37 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:23.504 03:58:37 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:23.504 03:58:37 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:23.504 03:58:37 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:23.504 03:58:37 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:23.504 03:58:37 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:23.504 03:58:37 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:23.504 03:58:37 -- nvmf/common.sh@105 -- # continue 2 00:07:23.504 03:58:37 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:23.504 03:58:37 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:23.504 03:58:37 -- nvmf/common.sh@103 -- # [[ mlx_0_1 
== \m\l\x\_\0\_\0 ]] 00:07:23.504 03:58:37 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:23.504 03:58:37 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:23.504 03:58:37 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:23.504 03:58:37 -- nvmf/common.sh@105 -- # continue 2 00:07:23.504 03:58:37 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:23.504 03:58:37 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:23.504 03:58:37 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:23.504 03:58:37 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:23.504 03:58:37 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:23.504 03:58:37 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:23.504 03:58:37 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:23.504 03:58:37 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:23.504 03:58:37 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:23.504 03:58:37 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:23.504 03:58:37 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:23.504 03:58:37 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:23.504 03:58:37 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:07:23.504 192.168.100.9' 00:07:23.504 03:58:37 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:23.504 192.168.100.9' 00:07:23.504 03:58:37 -- nvmf/common.sh@446 -- # head -n 1 00:07:23.504 03:58:37 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:23.504 03:58:37 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:07:23.504 192.168.100.9' 00:07:23.504 03:58:37 -- nvmf/common.sh@447 -- # tail -n +2 00:07:23.504 03:58:37 -- nvmf/common.sh@447 -- # head -n 1 00:07:23.504 03:58:37 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:23.504 03:58:37 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:07:23.504 03:58:37 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 
00:07:23.504 03:58:37 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:07:23.504 03:58:37 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:07:23.504 03:58:37 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:07:23.504 03:58:37 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:23.504 03:58:37 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:23.504 03:58:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:23.504 03:58:37 -- common/autotest_common.sh@10 -- # set +x 00:07:23.504 03:58:37 -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:07:23.504 03:58:37 -- target/nvmf_example.sh@34 -- # nvmfpid=155798 00:07:23.504 03:58:37 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:23.504 03:58:37 -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:23.504 03:58:37 -- target/nvmf_example.sh@36 -- # waitforlisten 155798 00:07:23.504 03:58:37 -- common/autotest_common.sh@817 -- # '[' -z 155798 ']' 00:07:23.504 03:58:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.504 03:58:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:23.504 03:58:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:23.504 03:58:37 -- common/autotest_common.sh@826 -- # xtrace_disable
00:07:23.504 03:58:37 -- common/autotest_common.sh@10 -- # set +x
00:07:23.504 EAL: No free 2048 kB hugepages reported on node 1
00:07:24.444 03:58:38 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:07:24.444 03:58:38 -- common/autotest_common.sh@850 -- # return 0
00:07:24.444 03:58:38 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:07:24.444 03:58:38 -- common/autotest_common.sh@716 -- # xtrace_disable
00:07:24.444 03:58:38 -- common/autotest_common.sh@10 -- # set +x
00:07:24.444 03:58:38 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:07:24.444 03:58:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:07:24.444 03:58:38 -- common/autotest_common.sh@10 -- # set +x
00:07:24.704 03:58:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:07:24.704 03:58:39 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:07:24.704 03:58:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:07:24.704 03:58:39 -- common/autotest_common.sh@10 -- # set +x
00:07:24.704 03:58:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:07:24.704 03:58:39 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:07:24.704 03:58:39 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:07:24.704 03:58:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:07:24.704 03:58:39 -- common/autotest_common.sh@10 -- # set +x
00:07:24.704 03:58:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:07:24.704 03:58:39 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:07:24.704 03:58:39 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:07:24.704 03:58:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:07:24.704 03:58:39 -- common/autotest_common.sh@10 -- # set +x
00:07:24.704 03:58:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:07:24.704 03:58:39 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:07:24.704 03:58:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:07:24.704 03:58:39 -- common/autotest_common.sh@10 -- # set +x
00:07:24.704 03:58:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:07:24.704 03:58:39 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:07:24.704 03:58:39 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:07:24.704 EAL: No free 2048 kB hugepages reported on node 1
00:07:36.931 Initializing NVMe Controllers
00:07:36.931 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:07:36.931 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:36.931 Initialization complete. Launching workers.
00:07:36.931 ========================================================
00:07:36.931                                                           Latency(us)
00:07:36.931 Device Information                                        :    IOPS    MiB/s    Average    min    max
00:07:36.931 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 27570.81 107.70 2321.18 570.53 12042.18
00:07:36.931 ========================================================
00:07:36.931 Total                                                     : 27570.81 107.70 2321.18 570.53 12042.18
00:07:36.931
00:07:36.931 03:58:50 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:07:36.931 03:58:50 -- target/nvmf_example.sh@66 -- # nvmftestfini
00:07:36.931 03:58:50 -- nvmf/common.sh@477 -- # nvmfcleanup
00:07:36.931 03:58:50 -- nvmf/common.sh@117 -- # sync
00:07:36.931 03:58:50 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:07:36.931 03:58:50 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:07:36.931 03:58:50 -- nvmf/common.sh@120 -- # set +e
00:07:36.931 03:58:50 -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:36.931 03:58:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:07:36.931 rmmod nvme_rdma
00:07:36.931 rmmod nvme_fabrics
00:07:36.931 03:58:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:36.931 03:58:50 -- nvmf/common.sh@124 -- # set -e
00:07:36.931 03:58:50 -- nvmf/common.sh@125 -- # return 0
00:07:36.931 03:58:50 -- nvmf/common.sh@478 -- # '[' -n 155798 ']'
00:07:36.931 03:58:50 -- nvmf/common.sh@479 -- # killprocess 155798
00:07:36.931 03:58:50 -- common/autotest_common.sh@936 -- # '[' -z 155798 ']'
00:07:36.931 03:58:50 -- common/autotest_common.sh@940 -- # kill -0 155798
00:07:36.931 03:58:50 -- common/autotest_common.sh@941 -- # uname
00:07:36.931 03:58:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:36.931 03:58:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 155798
00:07:36.931 03:58:50 -- common/autotest_common.sh@942 -- # process_name=nvmf
00:07:36.931 03:58:50 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']'
00:07:36.931 03:58:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 155798'
00:07:36.931 killing process with pid 155798
00:07:36.931 03:58:50 -- common/autotest_common.sh@955 -- # kill 155798
00:07:36.931 03:58:50 -- common/autotest_common.sh@960 -- # wait 155798
00:07:36.931 nvmf threads initialize successfully
00:07:36.931 bdev subsystem init successfully
00:07:36.931 created a nvmf target service
00:07:36.931 create targets's poll groups done
00:07:36.931 all subsystems of target started
00:07:36.931 nvmf target is running
00:07:36.931 all subsystems of target stopped
00:07:36.931 destroy targets's poll groups done
00:07:36.931 destroyed the nvmf target service
00:07:36.931 bdev subsystem finish successfully
00:07:36.931 nvmf threads destroy successfully
00:07:36.931 03:58:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:07:36.931 03:58:50 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]]
00:07:36.931 03:58:50 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:07:36.931 03:58:50 -- common/autotest_common.sh@716 -- # xtrace_disable
00:07:36.931 03:58:50 -- common/autotest_common.sh@10 -- # set +x
00:07:36.931
00:07:36.931 real 0m18.246s
00:07:36.931 user 0m51.405s
00:07:36.931 sys 0m4.548s
00:07:36.931 03:58:50 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:07:36.931 03:58:50 -- common/autotest_common.sh@10 -- # set +x
00:07:36.931 ************************************
00:07:36.931 END TEST nvmf_example
00:07:36.931 ************************************
00:07:36.931 03:58:50 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma
00:07:36.931 03:58:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:07:36.931 03:58:50 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:36.931 03:58:50 -- common/autotest_common.sh@10 -- # set +x
00:07:36.931 ************************************
00:07:36.931 START TEST nvmf_filesystem
************************************ 00:07:36.931 03:58:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:07:36.931 * Looking for test storage... 00:07:36.931 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:36.931 03:58:50 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:07:36.931 03:58:50 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:36.931 03:58:50 -- common/autotest_common.sh@34 -- # set -e 00:07:36.931 03:58:50 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:36.931 03:58:50 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:36.931 03:58:50 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:07:36.931 03:58:50 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:36.931 03:58:50 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:07:36.931 03:58:50 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:36.931 03:58:50 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:36.931 03:58:50 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:36.931 03:58:50 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:36.931 03:58:50 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:36.931 03:58:50 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:36.931 03:58:50 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:36.931 03:58:50 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:36.931 03:58:50 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:36.931 03:58:50 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:36.931 03:58:50 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:36.931 03:58:50 
-- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:36.931 03:58:50 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:36.931 03:58:50 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:36.931 03:58:50 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:36.931 03:58:50 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:36.931 03:58:50 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:36.931 03:58:50 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:36.931 03:58:50 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:07:36.931 03:58:50 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:36.931 03:58:50 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:36.931 03:58:50 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:36.931 03:58:50 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:36.931 03:58:50 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:36.931 03:58:50 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:36.931 03:58:50 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:36.931 03:58:50 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:36.931 03:58:50 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:36.931 03:58:50 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:36.931 03:58:50 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:36.931 03:58:50 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:36.931 03:58:50 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:36.931 03:58:50 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:36.931 03:58:50 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:36.931 03:58:50 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:36.931 03:58:50 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:07:36.931 03:58:50 -- 
common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:36.931 03:58:50 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:36.931 03:58:50 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:36.931 03:58:50 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:36.931 03:58:50 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:36.931 03:58:50 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:36.931 03:58:50 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:36.931 03:58:50 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:36.932 03:58:50 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:36.932 03:58:50 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:36.932 03:58:50 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:36.932 03:58:50 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:36.932 03:58:50 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:36.932 03:58:50 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:36.932 03:58:50 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:36.932 03:58:50 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:36.932 03:58:50 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:07:36.932 03:58:50 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:07:36.932 03:58:50 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:07:36.932 03:58:50 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:07:36.932 03:58:50 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:07:36.932 03:58:50 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:07:36.932 03:58:50 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:07:36.932 03:58:50 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:07:36.932 03:58:50 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:07:36.932 03:58:50 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:07:36.932 03:58:50 -- common/build_config.sh@63 -- # 
CONFIG_RDMA_PROV=verbs 00:07:36.932 03:58:50 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:07:36.932 03:58:50 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:07:36.932 03:58:50 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:07:36.932 03:58:50 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:07:36.932 03:58:50 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:36.932 03:58:50 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:07:36.932 03:58:50 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:07:36.932 03:58:50 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:07:36.932 03:58:50 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:07:36.932 03:58:50 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:07:36.932 03:58:50 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:07:36.932 03:58:50 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:07:36.932 03:58:50 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:07:36.932 03:58:50 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:07:36.932 03:58:50 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:07:36.932 03:58:50 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:07:36.932 03:58:50 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:36.932 03:58:50 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:07:36.932 03:58:50 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:07:36.932 03:58:50 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:07:36.932 03:58:50 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:07:36.932 03:58:50 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:07:36.932 03:58:50 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:07:36.932 03:58:50 -- common/applications.sh@9 
-- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:36.932 03:58:50 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:36.932 03:58:50 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:36.932 03:58:50 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:36.932 03:58:50 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:36.932 03:58:50 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:36.932 03:58:50 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:36.932 03:58:50 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:36.932 03:58:50 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:36.932 03:58:50 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:36.932 03:58:50 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:07:36.932 03:58:50 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:36.932 #define SPDK_CONFIG_H 00:07:36.932 #define SPDK_CONFIG_APPS 1 00:07:36.932 #define SPDK_CONFIG_ARCH native 00:07:36.932 #undef SPDK_CONFIG_ASAN 00:07:36.932 #undef SPDK_CONFIG_AVAHI 00:07:36.932 #undef SPDK_CONFIG_CET 00:07:36.932 #define SPDK_CONFIG_COVERAGE 1 00:07:36.932 #define SPDK_CONFIG_CROSS_PREFIX 00:07:36.932 #undef SPDK_CONFIG_CRYPTO 00:07:36.932 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:36.932 #undef SPDK_CONFIG_CUSTOMOCF 00:07:36.932 #undef SPDK_CONFIG_DAOS 00:07:36.932 #define SPDK_CONFIG_DAOS_DIR 00:07:36.932 #define SPDK_CONFIG_DEBUG 1 00:07:36.932 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:36.932 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:07:36.932 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:36.932 #define 
SPDK_CONFIG_DPDK_LIB_DIR 00:07:36.932 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:36.932 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:07:36.932 #define SPDK_CONFIG_EXAMPLES 1 00:07:36.932 #undef SPDK_CONFIG_FC 00:07:36.932 #define SPDK_CONFIG_FC_PATH 00:07:36.932 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:36.932 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:36.932 #undef SPDK_CONFIG_FUSE 00:07:36.932 #undef SPDK_CONFIG_FUZZER 00:07:36.932 #define SPDK_CONFIG_FUZZER_LIB 00:07:36.932 #undef SPDK_CONFIG_GOLANG 00:07:36.932 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:36.932 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:36.932 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:36.932 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:36.932 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:36.932 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:36.932 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:36.932 #define SPDK_CONFIG_IDXD 1 00:07:36.932 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:36.932 #undef SPDK_CONFIG_IPSEC_MB 00:07:36.932 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:36.932 #define SPDK_CONFIG_ISAL 1 00:07:36.932 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:36.932 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:36.932 #define SPDK_CONFIG_LIBDIR 00:07:36.932 #undef SPDK_CONFIG_LTO 00:07:36.932 #define SPDK_CONFIG_MAX_LCORES 00:07:36.932 #define SPDK_CONFIG_NVME_CUSE 1 00:07:36.932 #undef SPDK_CONFIG_OCF 00:07:36.932 #define SPDK_CONFIG_OCF_PATH 00:07:36.932 #define SPDK_CONFIG_OPENSSL_PATH 00:07:36.932 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:36.932 #define SPDK_CONFIG_PGO_DIR 00:07:36.932 #undef SPDK_CONFIG_PGO_USE 00:07:36.932 #define SPDK_CONFIG_PREFIX /usr/local 00:07:36.932 #undef SPDK_CONFIG_RAID5F 00:07:36.932 #undef SPDK_CONFIG_RBD 00:07:36.932 #define SPDK_CONFIG_RDMA 1 00:07:36.932 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:36.932 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:36.932 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:36.932 #define SPDK_CONFIG_RDMA_SET_TOS 1 
00:07:36.932 #define SPDK_CONFIG_SHARED 1 00:07:36.932 #undef SPDK_CONFIG_SMA 00:07:36.932 #define SPDK_CONFIG_TESTS 1 00:07:36.932 #undef SPDK_CONFIG_TSAN 00:07:36.932 #define SPDK_CONFIG_UBLK 1 00:07:36.932 #define SPDK_CONFIG_UBSAN 1 00:07:36.932 #undef SPDK_CONFIG_UNIT_TESTS 00:07:36.932 #undef SPDK_CONFIG_URING 00:07:36.932 #define SPDK_CONFIG_URING_PATH 00:07:36.932 #undef SPDK_CONFIG_URING_ZNS 00:07:36.932 #undef SPDK_CONFIG_USDT 00:07:36.932 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:36.932 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:36.932 #undef SPDK_CONFIG_VFIO_USER 00:07:36.932 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:36.932 #define SPDK_CONFIG_VHOST 1 00:07:36.932 #define SPDK_CONFIG_VIRTIO 1 00:07:36.932 #undef SPDK_CONFIG_VTUNE 00:07:36.932 #define SPDK_CONFIG_VTUNE_DIR 00:07:36.932 #define SPDK_CONFIG_WERROR 1 00:07:36.932 #define SPDK_CONFIG_WPDK_DIR 00:07:36.932 #undef SPDK_CONFIG_XNVME 00:07:36.932 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:36.932 03:58:50 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:36.932 03:58:50 -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:36.932 03:58:50 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.932 03:58:50 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.932 03:58:50 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.932 03:58:50 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.932 03:58:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.933 03:58:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.933 03:58:50 -- paths/export.sh@5 -- # export PATH 00:07:36.933 03:58:50 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.933 03:58:50 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:07:36.933 03:58:50 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:07:36.933 03:58:50 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:07:36.933 03:58:50 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:07:36.933 03:58:50 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:36.933 03:58:50 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:36.933 03:58:50 -- pm/common@67 -- # TEST_TAG=N/A 00:07:36.933 03:58:50 -- pm/common@68 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:07:36.933 03:58:50 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:07:36.933 03:58:50 -- pm/common@71 -- # uname -s 00:07:36.933 03:58:50 -- pm/common@71 -- # PM_OS=Linux 00:07:36.933 03:58:50 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:36.933 03:58:50 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:07:36.933 03:58:50 -- pm/common@76 -- # [[ Linux == Linux ]] 00:07:36.933 03:58:50 -- pm/common@76 -- # [[ ............................... 
!= QEMU ]] 00:07:36.933 03:58:50 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:07:36.933 03:58:50 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:36.933 03:58:50 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:36.933 03:58:50 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:07:36.933 03:58:50 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:07:36.933 03:58:50 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:07:36.933 03:58:50 -- common/autotest_common.sh@57 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:36.933 03:58:50 -- common/autotest_common.sh@61 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:36.933 03:58:50 -- common/autotest_common.sh@63 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:36.933 03:58:50 -- common/autotest_common.sh@65 -- # : 1 00:07:36.933 03:58:50 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:36.933 03:58:50 -- common/autotest_common.sh@67 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:36.933 03:58:50 -- common/autotest_common.sh@69 -- # : 00:07:36.933 03:58:50 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:36.933 03:58:50 -- common/autotest_common.sh@71 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:36.933 03:58:50 -- common/autotest_common.sh@73 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:36.933 03:58:50 -- common/autotest_common.sh@75 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:36.933 03:58:50 -- common/autotest_common.sh@77 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:36.933 
03:58:50 -- common/autotest_common.sh@79 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:36.933 03:58:50 -- common/autotest_common.sh@81 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:36.933 03:58:50 -- common/autotest_common.sh@83 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:36.933 03:58:50 -- common/autotest_common.sh@85 -- # : 1 00:07:36.933 03:58:50 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:07:36.933 03:58:50 -- common/autotest_common.sh@87 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:36.933 03:58:50 -- common/autotest_common.sh@89 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:36.933 03:58:50 -- common/autotest_common.sh@91 -- # : 1 00:07:36.933 03:58:50 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:36.933 03:58:50 -- common/autotest_common.sh@93 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:36.933 03:58:50 -- common/autotest_common.sh@95 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:36.933 03:58:50 -- common/autotest_common.sh@97 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:36.933 03:58:50 -- common/autotest_common.sh@99 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:36.933 03:58:50 -- common/autotest_common.sh@101 -- # : rdma 00:07:36.933 03:58:50 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:36.933 03:58:50 -- common/autotest_common.sh@103 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:36.933 03:58:50 -- common/autotest_common.sh@105 -- # : 0 00:07:36.933 
03:58:50 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:36.933 03:58:50 -- common/autotest_common.sh@107 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:36.933 03:58:50 -- common/autotest_common.sh@109 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:36.933 03:58:50 -- common/autotest_common.sh@111 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:36.933 03:58:50 -- common/autotest_common.sh@113 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:36.933 03:58:50 -- common/autotest_common.sh@115 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:36.933 03:58:50 -- common/autotest_common.sh@117 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:36.933 03:58:50 -- common/autotest_common.sh@119 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:36.933 03:58:50 -- common/autotest_common.sh@121 -- # : 1 00:07:36.933 03:58:50 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:36.933 03:58:50 -- common/autotest_common.sh@123 -- # : 00:07:36.933 03:58:50 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:36.933 03:58:50 -- common/autotest_common.sh@125 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:36.933 03:58:50 -- common/autotest_common.sh@127 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:36.933 03:58:50 -- common/autotest_common.sh@129 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:36.933 03:58:50 -- common/autotest_common.sh@131 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 
00:07:36.933 03:58:50 -- common/autotest_common.sh@133 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:36.933 03:58:50 -- common/autotest_common.sh@135 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:36.933 03:58:50 -- common/autotest_common.sh@137 -- # : 00:07:36.933 03:58:50 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:36.933 03:58:50 -- common/autotest_common.sh@139 -- # : true 00:07:36.933 03:58:50 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:36.933 03:58:50 -- common/autotest_common.sh@141 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:36.933 03:58:50 -- common/autotest_common.sh@143 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:36.933 03:58:50 -- common/autotest_common.sh@145 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:36.933 03:58:50 -- common/autotest_common.sh@147 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:36.933 03:58:50 -- common/autotest_common.sh@149 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:36.933 03:58:50 -- common/autotest_common.sh@151 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:36.933 03:58:50 -- common/autotest_common.sh@153 -- # : mlx5 00:07:36.933 03:58:50 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:36.933 03:58:50 -- common/autotest_common.sh@155 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:36.933 03:58:50 -- common/autotest_common.sh@157 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:36.933 03:58:50 -- common/autotest_common.sh@159 -- # : 0 
00:07:36.933 03:58:50 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:36.933 03:58:50 -- common/autotest_common.sh@161 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:36.933 03:58:50 -- common/autotest_common.sh@163 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:36.933 03:58:50 -- common/autotest_common.sh@166 -- # : 00:07:36.933 03:58:50 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:36.933 03:58:50 -- common/autotest_common.sh@168 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:36.933 03:58:50 -- common/autotest_common.sh@170 -- # : 0 00:07:36.933 03:58:50 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:36.933 03:58:50 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:07:36.933 03:58:50 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:07:36.933 03:58:50 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:07:36.933 03:58:50 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:07:36.933 03:58:50 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:36.933 03:58:50 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:36.934 03:58:50 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:36.934 03:58:50 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:36.934 03:58:50 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:36.934 03:58:50 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:36.934 03:58:50 -- common/autotest_common.sh@184 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:36.934 03:58:50 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:36.934 03:58:50 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:36.934 03:58:50 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:36.934 03:58:50 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:36.934 03:58:50 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:36.934 03:58:50 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:36.934 03:58:50 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:36.934 03:58:50 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:36.934 03:58:50 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:36.934 03:58:50 -- common/autotest_common.sh@199 -- # cat 00:07:36.934 03:58:50 
-- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:07:36.934 03:58:50 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:36.934 03:58:50 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:36.934 03:58:50 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:36.934 03:58:50 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:36.934 03:58:50 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:07:36.934 03:58:50 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:07:36.934 03:58:50 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:36.934 03:58:50 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:36.934 03:58:50 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:36.934 03:58:50 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:36.934 03:58:50 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:36.934 03:58:50 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:36.934 03:58:50 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:36.934 03:58:50 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:36.934 03:58:50 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:36.934 03:58:50 -- common/autotest_common.sh@245 -- # 
AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:36.934 03:58:50 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:36.934 03:58:50 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:36.934 03:58:50 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:07:36.934 03:58:50 -- common/autotest_common.sh@252 -- # export valgrind= 00:07:36.934 03:58:50 -- common/autotest_common.sh@252 -- # valgrind= 00:07:36.934 03:58:50 -- common/autotest_common.sh@258 -- # uname -s 00:07:36.934 03:58:50 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:07:36.934 03:58:50 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:07:36.934 03:58:50 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:07:36.934 03:58:50 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:07:36.934 03:58:50 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:07:36.934 03:58:50 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:07:36.934 03:58:50 -- common/autotest_common.sh@268 -- # MAKE=make 00:07:36.934 03:58:50 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j112 00:07:36.934 03:58:50 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:07:36.934 03:58:50 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:07:36.934 03:58:50 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:07:36.934 03:58:50 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:07:36.934 03:58:50 -- common/autotest_common.sh@289 -- # for i in "$@" 00:07:36.934 03:58:50 -- common/autotest_common.sh@290 -- # case "$i" in 00:07:36.934 03:58:50 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=rdma 00:07:36.934 03:58:50 -- common/autotest_common.sh@307 -- # [[ -z 158228 ]] 00:07:36.934 03:58:50 -- common/autotest_common.sh@307 -- # kill -0 158228 00:07:36.934 03:58:50 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:07:36.934 03:58:50 -- common/autotest_common.sh@317 -- # [[ -v testdir 
]] 00:07:36.934 03:58:50 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:07:36.934 03:58:50 -- common/autotest_common.sh@320 -- # local mount target_dir 00:07:36.934 03:58:50 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:07:36.934 03:58:50 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:07:36.934 03:58:50 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:07:36.934 03:58:50 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:07:36.934 03:58:50 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.z1qJic 00:07:36.934 03:58:50 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:36.934 03:58:50 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:07:36.934 03:58:50 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:07:36.934 03:58:50 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.z1qJic/tests/target /tmp/spdk.z1qJic 00:07:36.934 03:58:50 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:07:36.934 03:58:50 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:36.934 03:58:50 -- common/autotest_common.sh@316 -- # df -T 00:07:36.934 03:58:50 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:07:36.934 03:58:50 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:07:36.934 03:58:50 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:07:36.934 03:58:50 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:07:36.934 03:58:50 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:07:36.934 03:58:50 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:07:36.934 03:58:50 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:36.934 
03:58:50 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:07:36.934 03:58:50 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:07:36.934 03:58:50 -- common/autotest_common.sh@351 -- # avails["$mount"]=995516416 00:07:36.934 03:58:50 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:07:36.934 03:58:50 -- common/autotest_common.sh@352 -- # uses["$mount"]=4288913408 00:07:36.934 03:58:50 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:36.934 03:58:50 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root 00:07:36.934 03:58:50 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:07:36.934 03:58:50 -- common/autotest_common.sh@351 -- # avails["$mount"]=90539044864 00:07:36.934 03:58:50 -- common/autotest_common.sh@351 -- # sizes["$mount"]=95554768896 00:07:36.934 03:58:50 -- common/autotest_common.sh@352 -- # uses["$mount"]=5015724032 00:07:36.934 03:58:50 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:36.934 03:58:50 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:36.934 03:58:50 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:36.934 03:58:50 -- common/autotest_common.sh@351 -- # avails["$mount"]=47774007296 00:07:36.934 03:58:50 -- common/autotest_common.sh@351 -- # sizes["$mount"]=47777382400 00:07:36.934 03:58:50 -- common/autotest_common.sh@352 -- # uses["$mount"]=3375104 00:07:36.934 03:58:50 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:36.934 03:58:50 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:36.935 03:58:50 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:36.935 03:58:50 -- common/autotest_common.sh@351 -- # avails["$mount"]=19101528064 00:07:36.935 03:58:50 -- common/autotest_common.sh@351 -- # sizes["$mount"]=19110957056 00:07:36.935 03:58:50 -- common/autotest_common.sh@352 -- # uses["$mount"]=9428992 
00:07:36.935 03:58:50 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:36.935 03:58:50 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:36.935 03:58:50 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:36.935 03:58:50 -- common/autotest_common.sh@351 -- # avails["$mount"]=47777214464 00:07:36.935 03:58:50 -- common/autotest_common.sh@351 -- # sizes["$mount"]=47777386496 00:07:36.935 03:58:50 -- common/autotest_common.sh@352 -- # uses["$mount"]=172032 00:07:36.935 03:58:50 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:36.935 03:58:50 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:36.935 03:58:50 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:36.935 03:58:50 -- common/autotest_common.sh@351 -- # avails["$mount"]=9555472384 00:07:36.935 03:58:50 -- common/autotest_common.sh@351 -- # sizes["$mount"]=9555476480 00:07:36.935 03:58:50 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:07:36.935 03:58:50 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:36.935 03:58:50 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:07:36.935 * Looking for test storage... 
00:07:36.935 03:58:50 -- common/autotest_common.sh@357 -- # local target_space new_size 00:07:36.935 03:58:50 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:07:36.935 03:58:51 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:36.935 03:58:51 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:36.935 03:58:51 -- common/autotest_common.sh@361 -- # mount=/ 00:07:36.935 03:58:51 -- common/autotest_common.sh@363 -- # target_space=90539044864 00:07:36.935 03:58:51 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:07:36.935 03:58:51 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:07:36.935 03:58:51 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:07:36.935 03:58:51 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:07:36.935 03:58:51 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:07:36.935 03:58:51 -- common/autotest_common.sh@370 -- # new_size=7230316544 00:07:36.935 03:58:51 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:36.935 03:58:51 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:36.935 03:58:51 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:36.935 03:58:51 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:36.935 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:36.935 03:58:51 -- common/autotest_common.sh@378 -- # return 0 00:07:36.935 03:58:51 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:07:36.935 03:58:51 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:07:36.935 03:58:51 -- 
common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:36.935 03:58:51 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:36.935 03:58:51 -- common/autotest_common.sh@1673 -- # true 00:07:36.935 03:58:51 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:07:36.935 03:58:51 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:36.935 03:58:51 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:36.935 03:58:51 -- common/autotest_common.sh@27 -- # exec 00:07:36.935 03:58:51 -- common/autotest_common.sh@29 -- # exec 00:07:36.935 03:58:51 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:36.935 03:58:51 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:36.935 03:58:51 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:36.935 03:58:51 -- common/autotest_common.sh@18 -- # set -x 00:07:36.935 03:58:51 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:36.935 03:58:51 -- nvmf/common.sh@7 -- # uname -s 00:07:36.935 03:58:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:36.935 03:58:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:36.935 03:58:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:36.935 03:58:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:36.935 03:58:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:36.935 03:58:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:36.935 03:58:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:36.935 03:58:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:36.935 03:58:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:36.935 03:58:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:36.935 03:58:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:07:36.935 03:58:51 -- 
nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:07:36.935 03:58:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:36.935 03:58:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:36.935 03:58:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:36.935 03:58:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:36.935 03:58:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:36.935 03:58:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.935 03:58:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.935 03:58:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.935 03:58:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.935 03:58:51 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.935 03:58:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.935 03:58:51 -- paths/export.sh@5 -- # export PATH 00:07:36.935 03:58:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.935 03:58:51 -- nvmf/common.sh@47 
-- # : 0 00:07:36.935 03:58:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:36.935 03:58:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:36.935 03:58:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:36.935 03:58:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:36.935 03:58:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:36.935 03:58:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:36.935 03:58:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:36.935 03:58:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:36.935 03:58:51 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:36.935 03:58:51 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:36.935 03:58:51 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:36.935 03:58:51 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:07:36.935 03:58:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:36.935 03:58:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:36.935 03:58:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:36.935 03:58:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:36.935 03:58:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.935 03:58:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:36.935 03:58:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.935 03:58:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:36.935 03:58:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:36.935 03:58:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:36.935 03:58:51 -- common/autotest_common.sh@10 -- # set +x 00:07:42.232 03:58:56 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:42.232 03:58:56 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:42.232 03:58:56 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:42.232 03:58:56 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:42.232 03:58:56 -- nvmf/common.sh@292 
-- # local -a pci_net_devs 00:07:42.232 03:58:56 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:42.232 03:58:56 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:42.232 03:58:56 -- nvmf/common.sh@295 -- # net_devs=() 00:07:42.232 03:58:56 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:42.232 03:58:56 -- nvmf/common.sh@296 -- # e810=() 00:07:42.232 03:58:56 -- nvmf/common.sh@296 -- # local -ga e810 00:07:42.232 03:58:56 -- nvmf/common.sh@297 -- # x722=() 00:07:42.232 03:58:56 -- nvmf/common.sh@297 -- # local -ga x722 00:07:42.232 03:58:56 -- nvmf/common.sh@298 -- # mlx=() 00:07:42.232 03:58:56 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:42.232 03:58:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:42.232 03:58:56 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:42.232 03:58:56 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:42.232 03:58:56 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:42.232 03:58:56 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:42.232 03:58:56 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:42.232 03:58:56 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:42.232 03:58:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:42.232 03:58:56 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:42.232 03:58:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:42.232 03:58:56 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:42.232 03:58:56 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:42.232 03:58:56 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:42.232 03:58:56 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:42.232 03:58:56 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:42.232 03:58:56 -- 
nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:42.232 03:58:56 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:42.232 03:58:56 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:42.232 03:58:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.232 03:58:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:07:42.232 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:07:42.232 03:58:56 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:42.232 03:58:56 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:42.232 03:58:56 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:42.232 03:58:56 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:42.232 03:58:56 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:42.232 03:58:56 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:42.232 03:58:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.232 03:58:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:07:42.232 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:07:42.232 03:58:56 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:42.232 03:58:56 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:42.232 03:58:56 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:42.232 03:58:56 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:42.232 03:58:56 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:42.232 03:58:56 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:42.232 03:58:56 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:42.232 03:58:56 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:42.232 03:58:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.232 03:58:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.232 03:58:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:42.232 03:58:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:07:42.232 03:58:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:42.232 Found net devices under 0000:18:00.0: mlx_0_0 00:07:42.232 03:58:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.232 03:58:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.232 03:58:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.232 03:58:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:42.232 03:58:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.232 03:58:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:42.232 Found net devices under 0000:18:00.1: mlx_0_1 00:07:42.232 03:58:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.232 03:58:56 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:42.232 03:58:56 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:42.232 03:58:56 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:42.232 03:58:56 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:07:42.232 03:58:56 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:07:42.232 03:58:56 -- nvmf/common.sh@409 -- # rdma_device_init 00:07:42.232 03:58:56 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:07:42.232 03:58:56 -- nvmf/common.sh@58 -- # uname 00:07:42.232 03:58:56 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:42.232 03:58:56 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:42.232 03:58:56 -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:42.232 03:58:56 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:42.232 03:58:56 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:42.232 03:58:56 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:42.232 03:58:56 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:42.232 03:58:56 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:42.232 03:58:56 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:07:42.232 03:58:56 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 
00:07:42.232 03:58:56 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:42.232 03:58:56 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:42.232 03:58:56 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:42.232 03:58:56 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:42.232 03:58:56 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:42.232 03:58:56 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:42.232 03:58:56 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:42.232 03:58:56 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:42.232 03:58:56 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:42.232 03:58:56 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:42.232 03:58:56 -- nvmf/common.sh@105 -- # continue 2 00:07:42.232 03:58:56 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:42.232 03:58:56 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:42.232 03:58:56 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:42.232 03:58:56 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:42.232 03:58:56 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:42.232 03:58:56 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:42.232 03:58:56 -- nvmf/common.sh@105 -- # continue 2 00:07:42.232 03:58:56 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:42.232 03:58:56 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:42.232 03:58:56 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:42.232 03:58:56 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:42.232 03:58:56 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:42.232 03:58:56 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:42.232 03:58:56 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:42.232 03:58:56 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:42.232 03:58:56 -- nvmf/common.sh@81 -- 
# ip addr show mlx_0_0 00:07:42.232 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:42.232 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:07:42.232 altname enp24s0f0np0 00:07:42.232 altname ens785f0np0 00:07:42.232 inet 192.168.100.8/24 scope global mlx_0_0 00:07:42.232 valid_lft forever preferred_lft forever 00:07:42.232 03:58:56 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:42.232 03:58:56 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:42.232 03:58:56 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:42.232 03:58:56 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:42.232 03:58:56 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:42.232 03:58:56 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:42.232 03:58:56 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:42.232 03:58:56 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:42.233 03:58:56 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:42.233 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:42.233 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:07:42.233 altname enp24s0f1np1 00:07:42.233 altname ens785f1np1 00:07:42.233 inet 192.168.100.9/24 scope global mlx_0_1 00:07:42.233 valid_lft forever preferred_lft forever 00:07:42.233 03:58:56 -- nvmf/common.sh@411 -- # return 0 00:07:42.233 03:58:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:42.233 03:58:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:42.233 03:58:56 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:07:42.233 03:58:56 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:07:42.233 03:58:56 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:42.233 03:58:56 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:42.233 03:58:56 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:42.233 03:58:56 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:42.233 03:58:56 -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:42.233 03:58:56 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:42.233 03:58:56 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:42.233 03:58:56 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:42.233 03:58:56 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:42.233 03:58:56 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:42.233 03:58:56 -- nvmf/common.sh@105 -- # continue 2 00:07:42.233 03:58:56 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:42.233 03:58:56 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:42.233 03:58:56 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:42.233 03:58:56 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:42.233 03:58:56 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:42.233 03:58:56 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:42.233 03:58:56 -- nvmf/common.sh@105 -- # continue 2 00:07:42.233 03:58:56 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:42.233 03:58:56 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:42.233 03:58:56 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:42.233 03:58:56 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:42.233 03:58:56 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:42.233 03:58:56 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:42.233 03:58:56 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:42.233 03:58:56 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:42.233 03:58:56 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:42.233 03:58:56 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:42.233 03:58:56 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:42.233 03:58:56 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:42.233 03:58:56 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:07:42.233 
192.168.100.9' 00:07:42.233 03:58:56 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:42.233 192.168.100.9' 00:07:42.233 03:58:56 -- nvmf/common.sh@446 -- # head -n 1 00:07:42.233 03:58:56 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:42.233 03:58:56 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:07:42.233 192.168.100.9' 00:07:42.233 03:58:56 -- nvmf/common.sh@447 -- # tail -n +2 00:07:42.233 03:58:56 -- nvmf/common.sh@447 -- # head -n 1 00:07:42.233 03:58:56 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:42.233 03:58:56 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:07:42.233 03:58:56 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:42.233 03:58:56 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:07:42.233 03:58:56 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:07:42.233 03:58:56 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:07:42.233 03:58:56 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:42.233 03:58:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:42.233 03:58:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:42.233 03:58:56 -- common/autotest_common.sh@10 -- # set +x 00:07:42.233 ************************************ 00:07:42.233 START TEST nvmf_filesystem_no_in_capsule 00:07:42.233 ************************************ 00:07:42.233 03:58:56 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:07:42.233 03:58:56 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:42.233 03:58:56 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:42.233 03:58:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:42.233 03:58:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:42.233 03:58:56 -- common/autotest_common.sh@10 -- # set +x 00:07:42.233 03:58:56 -- nvmf/common.sh@470 -- # nvmfpid=161379 00:07:42.233 03:58:56 -- nvmf/common.sh@471 -- # 
waitforlisten 161379 00:07:42.233 03:58:56 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:42.233 03:58:56 -- common/autotest_common.sh@817 -- # '[' -z 161379 ']' 00:07:42.233 03:58:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.233 03:58:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:42.233 03:58:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.233 03:58:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:42.233 03:58:56 -- common/autotest_common.sh@10 -- # set +x 00:07:42.233 [2024-04-19 03:58:56.634601] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:07:42.233 [2024-04-19 03:58:56.634640] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.233 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.233 [2024-04-19 03:58:56.685600] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:42.493 [2024-04-19 03:58:56.763531] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:42.493 [2024-04-19 03:58:56.763563] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:42.493 [2024-04-19 03:58:56.763570] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:42.493 [2024-04-19 03:58:56.763575] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:42.493 [2024-04-19 03:58:56.763580] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:42.493 [2024-04-19 03:58:56.763624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.493 [2024-04-19 03:58:56.763637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.493 [2024-04-19 03:58:56.763652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:42.493 [2024-04-19 03:58:56.763653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.063 03:58:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:43.063 03:58:57 -- common/autotest_common.sh@850 -- # return 0 00:07:43.063 03:58:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:43.063 03:58:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:43.063 03:58:57 -- common/autotest_common.sh@10 -- # set +x 00:07:43.063 03:58:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.063 03:58:57 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:43.063 03:58:57 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:07:43.063 03:58:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.063 03:58:57 -- common/autotest_common.sh@10 -- # set +x 00:07:43.063 [2024-04-19 03:58:57.463112] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:07:43.063 [2024-04-19 03:58:57.482311] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1cd06c0/0x1cd4bb0) succeed. 00:07:43.063 [2024-04-19 03:58:57.491601] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1cd1cb0/0x1d16240) succeed. 
00:07:43.063 03:58:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.063 03:58:57 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:43.063 03:58:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.063 03:58:57 -- common/autotest_common.sh@10 -- # set +x 00:07:43.323 Malloc1 00:07:43.323 03:58:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.323 03:58:57 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:43.323 03:58:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.323 03:58:57 -- common/autotest_common.sh@10 -- # set +x 00:07:43.323 03:58:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.323 03:58:57 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:43.323 03:58:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.323 03:58:57 -- common/autotest_common.sh@10 -- # set +x 00:07:43.323 03:58:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.323 03:58:57 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:43.323 03:58:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.323 03:58:57 -- common/autotest_common.sh@10 -- # set +x 00:07:43.323 [2024-04-19 03:58:57.721590] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:43.323 03:58:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.323 03:58:57 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:43.323 03:58:57 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:43.323 03:58:57 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:43.323 03:58:57 -- common/autotest_common.sh@1366 -- # local bs 00:07:43.323 03:58:57 -- common/autotest_common.sh@1367 -- # local nb 00:07:43.323 
03:58:57 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:43.323 03:58:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.323 03:58:57 -- common/autotest_common.sh@10 -- # set +x 00:07:43.323 03:58:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.323 03:58:57 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:43.323 { 00:07:43.323 "name": "Malloc1", 00:07:43.323 "aliases": [ 00:07:43.323 "f11e1e88-0ef6-4e4b-aa57-f24e6ca5a8b7" 00:07:43.323 ], 00:07:43.324 "product_name": "Malloc disk", 00:07:43.324 "block_size": 512, 00:07:43.324 "num_blocks": 1048576, 00:07:43.324 "uuid": "f11e1e88-0ef6-4e4b-aa57-f24e6ca5a8b7", 00:07:43.324 "assigned_rate_limits": { 00:07:43.324 "rw_ios_per_sec": 0, 00:07:43.324 "rw_mbytes_per_sec": 0, 00:07:43.324 "r_mbytes_per_sec": 0, 00:07:43.324 "w_mbytes_per_sec": 0 00:07:43.324 }, 00:07:43.324 "claimed": true, 00:07:43.324 "claim_type": "exclusive_write", 00:07:43.324 "zoned": false, 00:07:43.324 "supported_io_types": { 00:07:43.324 "read": true, 00:07:43.324 "write": true, 00:07:43.324 "unmap": true, 00:07:43.324 "write_zeroes": true, 00:07:43.324 "flush": true, 00:07:43.324 "reset": true, 00:07:43.324 "compare": false, 00:07:43.324 "compare_and_write": false, 00:07:43.324 "abort": true, 00:07:43.324 "nvme_admin": false, 00:07:43.324 "nvme_io": false 00:07:43.324 }, 00:07:43.324 "memory_domains": [ 00:07:43.324 { 00:07:43.324 "dma_device_id": "system", 00:07:43.324 "dma_device_type": 1 00:07:43.324 }, 00:07:43.324 { 00:07:43.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.324 "dma_device_type": 2 00:07:43.324 } 00:07:43.324 ], 00:07:43.324 "driver_specific": {} 00:07:43.324 } 00:07:43.324 ]' 00:07:43.324 03:58:57 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:43.324 03:58:57 -- common/autotest_common.sh@1369 -- # bs=512 00:07:43.324 03:58:57 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:43.324 03:58:57 -- 
common/autotest_common.sh@1370 -- # nb=1048576 00:07:43.324 03:58:57 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:43.324 03:58:57 -- common/autotest_common.sh@1374 -- # echo 512 00:07:43.324 03:58:57 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:43.324 03:58:57 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:44.705 03:58:58 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:44.705 03:58:58 -- common/autotest_common.sh@1184 -- # local i=0 00:07:44.705 03:58:58 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:44.705 03:58:58 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:44.705 03:58:58 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:46.613 03:59:00 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:46.613 03:59:00 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:46.613 03:59:00 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:46.613 03:59:00 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:46.613 03:59:00 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:46.613 03:59:00 -- common/autotest_common.sh@1194 -- # return 0 00:07:46.613 03:59:00 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:46.613 03:59:00 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:46.613 03:59:00 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:46.613 03:59:00 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:46.613 03:59:00 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:46.613 03:59:00 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:46.613 03:59:00 -- setup/common.sh@80 -- # echo 536870912 00:07:46.613 03:59:00 -- 
target/filesystem.sh@64 -- # nvme_size=536870912 00:07:46.613 03:59:00 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:46.613 03:59:00 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:46.613 03:59:00 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:46.613 03:59:00 -- target/filesystem.sh@69 -- # partprobe 00:07:46.613 03:59:00 -- target/filesystem.sh@70 -- # sleep 1 00:07:47.551 03:59:01 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:47.551 03:59:01 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:47.551 03:59:01 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:47.551 03:59:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.551 03:59:01 -- common/autotest_common.sh@10 -- # set +x 00:07:47.811 ************************************ 00:07:47.811 START TEST filesystem_ext4 00:07:47.811 ************************************ 00:07:47.811 03:59:02 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:47.811 03:59:02 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:47.811 03:59:02 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:47.811 03:59:02 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:47.812 03:59:02 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:47.812 03:59:02 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:47.812 03:59:02 -- common/autotest_common.sh@914 -- # local i=0 00:07:47.812 03:59:02 -- common/autotest_common.sh@915 -- # local force 00:07:47.812 03:59:02 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:47.812 03:59:02 -- common/autotest_common.sh@918 -- # force=-F 00:07:47.812 03:59:02 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:47.812 mke2fs 1.46.5 (30-Dec-2021) 00:07:47.812 Discarding device blocks: 0/522240 done 00:07:47.812 Creating filesystem with 
522240 1k blocks and 130560 inodes 00:07:47.812 Filesystem UUID: f0869bad-3fe7-47e8-ab07-bf0156487afa 00:07:47.812 Superblock backups stored on blocks: 00:07:47.812 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:47.812 00:07:47.812 Allocating group tables: 0/64 done 00:07:47.812 Writing inode tables: 0/64 done 00:07:47.812 Creating journal (8192 blocks): done 00:07:47.812 Writing superblocks and filesystem accounting information: 0/64 done 00:07:47.812 00:07:47.812 03:59:02 -- common/autotest_common.sh@931 -- # return 0 00:07:47.812 03:59:02 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:47.812 03:59:02 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:47.812 03:59:02 -- target/filesystem.sh@25 -- # sync 00:07:47.812 03:59:02 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:47.812 03:59:02 -- target/filesystem.sh@27 -- # sync 00:07:47.812 03:59:02 -- target/filesystem.sh@29 -- # i=0 00:07:47.812 03:59:02 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:47.812 03:59:02 -- target/filesystem.sh@37 -- # kill -0 161379 00:07:47.812 03:59:02 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:47.812 03:59:02 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:47.812 03:59:02 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:47.812 03:59:02 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:47.812 00:07:47.812 real 0m0.164s 00:07:47.812 user 0m0.016s 00:07:47.812 sys 0m0.063s 00:07:47.812 03:59:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:47.812 03:59:02 -- common/autotest_common.sh@10 -- # set +x 00:07:47.812 ************************************ 00:07:47.812 END TEST filesystem_ext4 00:07:47.812 ************************************ 00:07:47.812 03:59:02 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:47.812 03:59:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:47.812 03:59:02 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.812 03:59:02 -- common/autotest_common.sh@10 -- # set +x 00:07:48.071 ************************************ 00:07:48.071 START TEST filesystem_btrfs 00:07:48.071 ************************************ 00:07:48.071 03:59:02 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:48.071 03:59:02 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:48.071 03:59:02 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:48.071 03:59:02 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:48.071 03:59:02 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:48.071 03:59:02 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:48.071 03:59:02 -- common/autotest_common.sh@914 -- # local i=0 00:07:48.071 03:59:02 -- common/autotest_common.sh@915 -- # local force 00:07:48.071 03:59:02 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:48.071 03:59:02 -- common/autotest_common.sh@920 -- # force=-f 00:07:48.071 03:59:02 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:48.071 btrfs-progs v6.6.2 00:07:48.071 See https://btrfs.readthedocs.io for more information. 00:07:48.071 00:07:48.071 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:48.071 NOTE: several default settings have changed in version 5.15, please make sure 00:07:48.071 this does not affect your deployments: 00:07:48.071 - DUP for metadata (-m dup) 00:07:48.071 - enabled no-holes (-O no-holes) 00:07:48.071 - enabled free-space-tree (-R free-space-tree) 00:07:48.071 00:07:48.071 Label: (null) 00:07:48.071 UUID: 820b186a-4c05-4825-8715-054808d9b5d7 00:07:48.071 Node size: 16384 00:07:48.072 Sector size: 4096 00:07:48.072 Filesystem size: 510.00MiB 00:07:48.072 Block group profiles: 00:07:48.072 Data: single 8.00MiB 00:07:48.072 Metadata: DUP 32.00MiB 00:07:48.072 System: DUP 8.00MiB 00:07:48.072 SSD detected: yes 00:07:48.072 Zoned device: no 00:07:48.072 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:48.072 Runtime features: free-space-tree 00:07:48.072 Checksum: crc32c 00:07:48.072 Number of devices: 1 00:07:48.072 Devices: 00:07:48.072 ID SIZE PATH 00:07:48.072 1 510.00MiB /dev/nvme0n1p1 00:07:48.072 00:07:48.072 03:59:02 -- common/autotest_common.sh@931 -- # return 0 00:07:48.072 03:59:02 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:48.331 03:59:02 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:48.331 03:59:02 -- target/filesystem.sh@25 -- # sync 00:07:48.331 03:59:02 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:48.331 03:59:02 -- target/filesystem.sh@27 -- # sync 00:07:48.331 03:59:02 -- target/filesystem.sh@29 -- # i=0 00:07:48.331 03:59:02 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:48.331 03:59:02 -- target/filesystem.sh@37 -- # kill -0 161379 00:07:48.331 03:59:02 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:48.331 03:59:02 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:48.331 03:59:02 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:48.331 03:59:02 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:48.331 00:07:48.331 real 0m0.282s 00:07:48.331 user 0m0.033s 00:07:48.331 sys 0m0.150s 00:07:48.331 
03:59:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:48.331 03:59:02 -- common/autotest_common.sh@10 -- # set +x 00:07:48.331 ************************************ 00:07:48.331 END TEST filesystem_btrfs 00:07:48.331 ************************************ 00:07:48.331 03:59:02 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:48.331 03:59:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:48.331 03:59:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.332 03:59:02 -- common/autotest_common.sh@10 -- # set +x 00:07:48.591 ************************************ 00:07:48.591 START TEST filesystem_xfs 00:07:48.591 ************************************ 00:07:48.591 03:59:02 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:48.591 03:59:02 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:48.591 03:59:02 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:48.591 03:59:02 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:48.591 03:59:02 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:48.591 03:59:02 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:48.591 03:59:02 -- common/autotest_common.sh@914 -- # local i=0 00:07:48.591 03:59:02 -- common/autotest_common.sh@915 -- # local force 00:07:48.592 03:59:02 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:48.592 03:59:02 -- common/autotest_common.sh@920 -- # force=-f 00:07:48.592 03:59:02 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:48.592 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:48.592 = sectsz=512 attr=2, projid32bit=1 00:07:48.592 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:48.592 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:48.592 data = bsize=4096 blocks=130560, imaxpct=25 00:07:48.592 = sunit=0 swidth=0 blks 00:07:48.592 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 
00:07:48.592 log =internal log bsize=4096 blocks=16384, version=2 00:07:48.592 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:48.592 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:48.592 Discarding blocks...Done. 00:07:48.592 03:59:03 -- common/autotest_common.sh@931 -- # return 0 00:07:48.592 03:59:03 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:49.160 03:59:03 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:49.160 03:59:03 -- target/filesystem.sh@25 -- # sync 00:07:49.160 03:59:03 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:49.160 03:59:03 -- target/filesystem.sh@27 -- # sync 00:07:49.160 03:59:03 -- target/filesystem.sh@29 -- # i=0 00:07:49.160 03:59:03 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:49.160 03:59:03 -- target/filesystem.sh@37 -- # kill -0 161379 00:07:49.160 03:59:03 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:49.160 03:59:03 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:49.160 03:59:03 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:49.160 03:59:03 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:49.160 00:07:49.160 real 0m0.613s 00:07:49.160 user 0m0.029s 00:07:49.160 sys 0m0.097s 00:07:49.160 03:59:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:49.160 03:59:03 -- common/autotest_common.sh@10 -- # set +x 00:07:49.160 ************************************ 00:07:49.160 END TEST filesystem_xfs 00:07:49.160 ************************************ 00:07:49.160 03:59:03 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:49.160 03:59:03 -- target/filesystem.sh@93 -- # sync 00:07:49.160 03:59:03 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:50.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:50.107 03:59:04 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:50.107 03:59:04 -- common/autotest_common.sh@1205 -- # 
local i=0 00:07:50.107 03:59:04 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:50.107 03:59:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:50.107 03:59:04 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:50.107 03:59:04 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:50.107 03:59:04 -- common/autotest_common.sh@1217 -- # return 0 00:07:50.107 03:59:04 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:50.107 03:59:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:50.107 03:59:04 -- common/autotest_common.sh@10 -- # set +x 00:07:50.107 03:59:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:50.107 03:59:04 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:50.107 03:59:04 -- target/filesystem.sh@101 -- # killprocess 161379 00:07:50.107 03:59:04 -- common/autotest_common.sh@936 -- # '[' -z 161379 ']' 00:07:50.107 03:59:04 -- common/autotest_common.sh@940 -- # kill -0 161379 00:07:50.107 03:59:04 -- common/autotest_common.sh@941 -- # uname 00:07:50.107 03:59:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:50.107 03:59:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 161379 00:07:50.107 03:59:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:50.107 03:59:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:50.107 03:59:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 161379' 00:07:50.107 killing process with pid 161379 00:07:50.107 03:59:04 -- common/autotest_common.sh@955 -- # kill 161379 00:07:50.107 03:59:04 -- common/autotest_common.sh@960 -- # wait 161379 00:07:50.676 03:59:05 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:50.676 00:07:50.676 real 0m8.415s 00:07:50.676 user 0m32.973s 00:07:50.676 sys 0m1.219s 00:07:50.676 03:59:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 
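Each of the three `filesystem_*` tests above exercised the freshly made filesystem the same way after `mkfs`: mount, touch a file, sync, remove it, sync, umount. A sketch of that cycle run against a temporary directory (a stand-in for the real `/mnt/device` mount, so it runs without a block device):

```shell
# Stand-in for the mount/touch/sync/rm/umount cycle in filesystem.sh,
# using a temp directory instead of a real mounted partition.
device_dir=$(mktemp -d)        # real test: mount /dev/nvme0n1p1 /mnt/device

touch "$device_dir/aaa"        # create a file on the fresh filesystem
sync                           # flush it to the (would-be) device
[ -e "$device_dir/aaa" ] && created=yes

rm "$device_dir/aaa"           # delete it and flush again
sync
[ ! -e "$device_dir/aaa" ] && removed=yes

rmdir "$device_dir"            # real test: umount /mnt/device
```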
00:07:50.676 03:59:05 -- common/autotest_common.sh@10 -- # set +x 00:07:50.676 ************************************ 00:07:50.676 END TEST nvmf_filesystem_no_in_capsule 00:07:50.676 ************************************ 00:07:50.676 03:59:05 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:50.676 03:59:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:50.676 03:59:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.676 03:59:05 -- common/autotest_common.sh@10 -- # set +x 00:07:50.676 ************************************ 00:07:50.676 START TEST nvmf_filesystem_in_capsule 00:07:50.676 ************************************ 00:07:50.676 03:59:05 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:07:50.676 03:59:05 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:50.676 03:59:05 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:50.676 03:59:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:50.676 03:59:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:50.676 03:59:05 -- common/autotest_common.sh@10 -- # set +x 00:07:50.676 03:59:05 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:50.676 03:59:05 -- nvmf/common.sh@470 -- # nvmfpid=163188 00:07:50.676 03:59:05 -- nvmf/common.sh@471 -- # waitforlisten 163188 00:07:50.676 03:59:05 -- common/autotest_common.sh@817 -- # '[' -z 163188 ']' 00:07:50.676 03:59:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.676 03:59:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:50.676 03:59:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
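The run that starts below repeats the same flow; the one functional difference is the in-capsule data size passed to `nvmf_create_transport` via `rpc_cmd` (`-c 0` for the test above, `-c 4096` for the one below). A small sketch comparing the two invocations recorded in this log (the helper function here is illustrative, not part of the test suite):

```shell
# The two nvmf_filesystem_part runs in this log differ only in the -c value.
build_transport_cmd() {
    local in_capsule=$1
    echo "nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c $in_capsule"
}
build_transport_cmd 0      # nvmf_filesystem_no_in_capsule
build_transport_cmd 4096   # nvmf_filesystem_in_capsule
```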
00:07:50.676 03:59:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:50.676 03:59:05 -- common/autotest_common.sh@10 -- # set +x 00:07:50.676 [2024-04-19 03:59:05.191869] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:07:50.676 [2024-04-19 03:59:05.191902] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.936 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.936 [2024-04-19 03:59:05.244511] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:50.936 [2024-04-19 03:59:05.317897] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:50.936 [2024-04-19 03:59:05.317934] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:50.936 [2024-04-19 03:59:05.317941] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:50.936 [2024-04-19 03:59:05.317946] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:50.936 [2024-04-19 03:59:05.317951] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:50.936 [2024-04-19 03:59:05.317988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.936 [2024-04-19 03:59:05.318079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.936 [2024-04-19 03:59:05.318097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.936 [2024-04-19 03:59:05.318098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.506 03:59:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:51.506 03:59:05 -- common/autotest_common.sh@850 -- # return 0 00:07:51.506 03:59:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:51.506 03:59:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:51.506 03:59:05 -- common/autotest_common.sh@10 -- # set +x 00:07:51.506 03:59:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.506 03:59:06 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:51.506 03:59:06 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:07:51.506 03:59:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:51.506 03:59:06 -- common/autotest_common.sh@10 -- # set +x 00:07:51.766 [2024-04-19 03:59:06.039052] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6cb6c0/0x6cfbb0) succeed. 00:07:51.766 [2024-04-19 03:59:06.048304] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6cccb0/0x711240) succeed. 
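The `get_bdev_size` / `sec_size_to_bytes` comparison that follows (as in the first test) reduces to sector arithmetic: `bdev_get_bdevs` reports 1048576 blocks of 512 bytes for Malloc1, and `/sys/block/<dev>/size` is likewise counted in 512-byte sectors. A self-contained sketch of that check, with values taken from the trace:

```shell
# Reconstructed size check: both sides come out in bytes.
block_size=512        # "block_size" from bdev_get_bdevs in the trace
num_blocks=1048576    # "num_blocks" from the same output
malloc_size=$((block_size * num_blocks))   # what get_bdev_size computes

sectors=1048576       # /sys/block/nvme0n1/size counts 512-byte sectors
nvme_size=$((sectors * 512))               # what sec_size_to_bytes computes

(( nvme_size == malloc_size )) && echo match
```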
00:07:51.766 03:59:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:51.766 03:59:06 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:51.766 03:59:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:51.766 03:59:06 -- common/autotest_common.sh@10 -- # set +x 00:07:51.766 Malloc1 00:07:51.766 03:59:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:51.766 03:59:06 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:51.766 03:59:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:51.766 03:59:06 -- common/autotest_common.sh@10 -- # set +x 00:07:51.766 03:59:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:51.766 03:59:06 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:51.766 03:59:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:51.766 03:59:06 -- common/autotest_common.sh@10 -- # set +x 00:07:51.766 03:59:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:51.766 03:59:06 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:52.026 03:59:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:52.026 03:59:06 -- common/autotest_common.sh@10 -- # set +x 00:07:52.026 [2024-04-19 03:59:06.298298] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:52.026 03:59:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:52.026 03:59:06 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:52.026 03:59:06 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:52.026 03:59:06 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:52.026 03:59:06 -- common/autotest_common.sh@1366 -- # local bs 00:07:52.026 03:59:06 -- common/autotest_common.sh@1367 -- # local nb 00:07:52.026 
03:59:06 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:52.026 03:59:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:52.026 03:59:06 -- common/autotest_common.sh@10 -- # set +x 00:07:52.026 03:59:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:52.026 03:59:06 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:52.026 { 00:07:52.026 "name": "Malloc1", 00:07:52.026 "aliases": [ 00:07:52.026 "997605c1-94d3-4160-af8d-d5a554bc31ee" 00:07:52.026 ], 00:07:52.026 "product_name": "Malloc disk", 00:07:52.026 "block_size": 512, 00:07:52.026 "num_blocks": 1048576, 00:07:52.026 "uuid": "997605c1-94d3-4160-af8d-d5a554bc31ee", 00:07:52.026 "assigned_rate_limits": { 00:07:52.026 "rw_ios_per_sec": 0, 00:07:52.026 "rw_mbytes_per_sec": 0, 00:07:52.026 "r_mbytes_per_sec": 0, 00:07:52.026 "w_mbytes_per_sec": 0 00:07:52.026 }, 00:07:52.026 "claimed": true, 00:07:52.026 "claim_type": "exclusive_write", 00:07:52.026 "zoned": false, 00:07:52.026 "supported_io_types": { 00:07:52.026 "read": true, 00:07:52.026 "write": true, 00:07:52.026 "unmap": true, 00:07:52.026 "write_zeroes": true, 00:07:52.026 "flush": true, 00:07:52.026 "reset": true, 00:07:52.026 "compare": false, 00:07:52.026 "compare_and_write": false, 00:07:52.026 "abort": true, 00:07:52.026 "nvme_admin": false, 00:07:52.026 "nvme_io": false 00:07:52.026 }, 00:07:52.026 "memory_domains": [ 00:07:52.026 { 00:07:52.026 "dma_device_id": "system", 00:07:52.026 "dma_device_type": 1 00:07:52.026 }, 00:07:52.026 { 00:07:52.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.026 "dma_device_type": 2 00:07:52.026 } 00:07:52.026 ], 00:07:52.026 "driver_specific": {} 00:07:52.026 } 00:07:52.026 ]' 00:07:52.026 03:59:06 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:52.026 03:59:06 -- common/autotest_common.sh@1369 -- # bs=512 00:07:52.026 03:59:06 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:52.026 03:59:06 -- 
common/autotest_common.sh@1370 -- # nb=1048576 00:07:52.026 03:59:06 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:52.026 03:59:06 -- common/autotest_common.sh@1374 -- # echo 512 00:07:52.026 03:59:06 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:52.026 03:59:06 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:52.967 03:59:07 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:52.967 03:59:07 -- common/autotest_common.sh@1184 -- # local i=0 00:07:52.967 03:59:07 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:52.967 03:59:07 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:52.967 03:59:07 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:54.876 03:59:09 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:54.876 03:59:09 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:54.876 03:59:09 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:54.876 03:59:09 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:54.876 03:59:09 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:54.876 03:59:09 -- common/autotest_common.sh@1194 -- # return 0 00:07:54.876 03:59:09 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:54.876 03:59:09 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:54.876 03:59:09 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:54.876 03:59:09 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:54.876 03:59:09 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:54.876 03:59:09 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:54.876 03:59:09 -- setup/common.sh@80 -- # echo 536870912 00:07:54.876 03:59:09 -- 
target/filesystem.sh@64 -- # nvme_size=536870912 00:07:54.876 03:59:09 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:54.876 03:59:09 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:54.876 03:59:09 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:55.137 03:59:09 -- target/filesystem.sh@69 -- # partprobe 00:07:55.137 03:59:09 -- target/filesystem.sh@70 -- # sleep 1 00:07:56.076 03:59:10 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:56.076 03:59:10 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:56.076 03:59:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:56.076 03:59:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.076 03:59:10 -- common/autotest_common.sh@10 -- # set +x 00:07:56.337 ************************************ 00:07:56.337 START TEST filesystem_in_capsule_ext4 00:07:56.337 ************************************ 00:07:56.337 03:59:10 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:56.337 03:59:10 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:56.337 03:59:10 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:56.337 03:59:10 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:56.337 03:59:10 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:56.337 03:59:10 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:56.337 03:59:10 -- common/autotest_common.sh@914 -- # local i=0 00:07:56.337 03:59:10 -- common/autotest_common.sh@915 -- # local force 00:07:56.337 03:59:10 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:56.337 03:59:10 -- common/autotest_common.sh@918 -- # force=-F 00:07:56.337 03:59:10 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:56.337 mke2fs 1.46.5 (30-Dec-2021) 00:07:56.337 Discarding device blocks: 0/522240 done 00:07:56.337 
Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:56.337 Filesystem UUID: b698c683-4e5b-4187-a629-6e48d3e22dac 00:07:56.337 Superblock backups stored on blocks: 00:07:56.337 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:56.337 00:07:56.337 Allocating group tables: 0/64 done 00:07:56.337 Writing inode tables: 0/64 done 00:07:56.337 Creating journal (8192 blocks): done 00:07:56.337 Writing superblocks and filesystem accounting information: 0/64 done 00:07:56.337 00:07:56.337 03:59:10 -- common/autotest_common.sh@931 -- # return 0 00:07:56.337 03:59:10 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:56.337 03:59:10 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:56.337 03:59:10 -- target/filesystem.sh@25 -- # sync 00:07:56.337 03:59:10 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:56.337 03:59:10 -- target/filesystem.sh@27 -- # sync 00:07:56.337 03:59:10 -- target/filesystem.sh@29 -- # i=0 00:07:56.337 03:59:10 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:56.337 03:59:10 -- target/filesystem.sh@37 -- # kill -0 163188 00:07:56.337 03:59:10 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:56.337 03:59:10 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:56.337 03:59:10 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:56.337 03:59:10 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:56.337 00:07:56.337 real 0m0.166s 00:07:56.337 user 0m0.022s 00:07:56.337 sys 0m0.056s 00:07:56.337 03:59:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:56.337 03:59:10 -- common/autotest_common.sh@10 -- # set +x 00:07:56.337 ************************************ 00:07:56.337 END TEST filesystem_in_capsule_ext4 00:07:56.337 ************************************ 00:07:56.337 03:59:10 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:56.337 03:59:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 
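Earlier in the trace, `get_bdev_size` multiplies the `block_size` and `num_blocks` fields from `bdev_get_bdevs`, and the later `(( nvme_size == malloc_size ))` check passes because both sides come to 512 MiB. A standalone sketch of that arithmetic, using the values reported in the log:

```shell
#!/usr/bin/env bash
# Values reported by bdev_get_bdevs for Malloc1 in the trace above
bs=512        # "block_size": 512
nb=1048576    # "num_blocks": 1048576
# get_bdev_size reports MiB; the nvme_size/malloc_size comparison is in bytes
bdev_size_mib=$(( bs * nb / 1024 / 1024 ))
malloc_size=$(( bdev_size_mib * 1024 * 1024 ))
echo "bdev: ${bdev_size_mib} MiB, ${malloc_size} bytes"
```

This matches the log: `bdev_size=512`, `malloc_size=536870912`, and `sec_size_to_bytes nvme0n1` echoing the same `536870912` for the connected namespace.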
00:07:56.337 03:59:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.337 03:59:10 -- common/autotest_common.sh@10 -- # set +x 00:07:56.597 ************************************ 00:07:56.597 START TEST filesystem_in_capsule_btrfs 00:07:56.597 ************************************ 00:07:56.597 03:59:10 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:56.597 03:59:10 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:56.597 03:59:10 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:56.597 03:59:10 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:56.597 03:59:10 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:56.597 03:59:10 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:56.597 03:59:10 -- common/autotest_common.sh@914 -- # local i=0 00:07:56.597 03:59:10 -- common/autotest_common.sh@915 -- # local force 00:07:56.598 03:59:10 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:56.598 03:59:10 -- common/autotest_common.sh@920 -- # force=-f 00:07:56.598 03:59:10 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:56.598 btrfs-progs v6.6.2 00:07:56.598 See https://btrfs.readthedocs.io for more information. 00:07:56.598 00:07:56.598 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:56.598 NOTE: several default settings have changed in version 5.15, please make sure 00:07:56.598 this does not affect your deployments: 00:07:56.598 - DUP for metadata (-m dup) 00:07:56.598 - enabled no-holes (-O no-holes) 00:07:56.598 - enabled free-space-tree (-R free-space-tree) 00:07:56.598 00:07:56.598 Label: (null) 00:07:56.598 UUID: 4570c436-83c0-4bd8-8977-2391cc929e28 00:07:56.598 Node size: 16384 00:07:56.598 Sector size: 4096 00:07:56.598 Filesystem size: 510.00MiB 00:07:56.598 Block group profiles: 00:07:56.598 Data: single 8.00MiB 00:07:56.598 Metadata: DUP 32.00MiB 00:07:56.598 System: DUP 8.00MiB 00:07:56.598 SSD detected: yes 00:07:56.598 Zoned device: no 00:07:56.598 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:56.598 Runtime features: free-space-tree 00:07:56.598 Checksum: crc32c 00:07:56.598 Number of devices: 1 00:07:56.598 Devices: 00:07:56.598 ID SIZE PATH 00:07:56.598 1 510.00MiB /dev/nvme0n1p1 00:07:56.598 00:07:56.598 03:59:11 -- common/autotest_common.sh@931 -- # return 0 00:07:56.598 03:59:11 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:56.864 03:59:11 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:56.864 03:59:11 -- target/filesystem.sh@25 -- # sync 00:07:56.864 03:59:11 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:56.864 03:59:11 -- target/filesystem.sh@27 -- # sync 00:07:56.865 03:59:11 -- target/filesystem.sh@29 -- # i=0 00:07:56.865 03:59:11 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:56.865 03:59:11 -- target/filesystem.sh@37 -- # kill -0 163188 00:07:56.865 03:59:11 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:56.865 03:59:11 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:56.865 03:59:11 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:56.865 03:59:11 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:56.865 00:07:56.865 real 0m0.238s 00:07:56.865 user 0m0.031s 00:07:56.865 sys 0m0.113s 00:07:56.865 
03:59:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:56.865 03:59:11 -- common/autotest_common.sh@10 -- # set +x 00:07:56.865 ************************************ 00:07:56.865 END TEST filesystem_in_capsule_btrfs 00:07:56.865 ************************************ 00:07:56.865 03:59:11 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:56.865 03:59:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:56.865 03:59:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.865 03:59:11 -- common/autotest_common.sh@10 -- # set +x 00:07:56.865 ************************************ 00:07:56.865 START TEST filesystem_in_capsule_xfs 00:07:56.865 ************************************ 00:07:56.865 03:59:11 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:56.865 03:59:11 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:56.865 03:59:11 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:56.865 03:59:11 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:56.865 03:59:11 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:56.865 03:59:11 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:56.865 03:59:11 -- common/autotest_common.sh@914 -- # local i=0 00:07:56.865 03:59:11 -- common/autotest_common.sh@915 -- # local force 00:07:56.865 03:59:11 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:56.865 03:59:11 -- common/autotest_common.sh@920 -- # force=-f 00:07:56.865 03:59:11 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:57.126 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:57.126 = sectsz=512 attr=2, projid32bit=1 00:07:57.126 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:57.126 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:57.126 data = bsize=4096 blocks=130560, imaxpct=25 00:07:57.126 = sunit=0 swidth=0 blks 00:07:57.126 naming =version 2 
bsize=4096 ascii-ci=0, ftype=1 00:07:57.126 log =internal log bsize=4096 blocks=16384, version=2 00:07:57.126 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:57.126 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:57.126 Discarding blocks...Done. 00:07:57.126 03:59:11 -- common/autotest_common.sh@931 -- # return 0 00:07:57.126 03:59:11 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:57.126 03:59:11 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:57.126 03:59:11 -- target/filesystem.sh@25 -- # sync 00:07:57.126 03:59:11 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:57.126 03:59:11 -- target/filesystem.sh@27 -- # sync 00:07:57.126 03:59:11 -- target/filesystem.sh@29 -- # i=0 00:07:57.126 03:59:11 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:57.126 03:59:11 -- target/filesystem.sh@37 -- # kill -0 163188 00:07:57.126 03:59:11 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:57.126 03:59:11 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:57.126 03:59:11 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:57.126 03:59:11 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:57.126 00:07:57.126 real 0m0.185s 00:07:57.126 user 0m0.022s 00:07:57.126 sys 0m0.061s 00:07:57.126 03:59:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:57.126 03:59:11 -- common/autotest_common.sh@10 -- # set +x 00:07:57.126 ************************************ 00:07:57.126 END TEST filesystem_in_capsule_xfs 00:07:57.126 ************************************ 00:07:57.126 03:59:11 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:57.126 03:59:11 -- target/filesystem.sh@93 -- # sync 00:07:57.126 03:59:11 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:58.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:58.066 03:59:12 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:58.066 
03:59:12 -- common/autotest_common.sh@1205 -- # local i=0 00:07:58.066 03:59:12 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:58.066 03:59:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:58.066 03:59:12 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:58.066 03:59:12 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:58.326 03:59:12 -- common/autotest_common.sh@1217 -- # return 0 00:07:58.326 03:59:12 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:58.326 03:59:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:58.326 03:59:12 -- common/autotest_common.sh@10 -- # set +x 00:07:58.326 03:59:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:58.326 03:59:12 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:58.326 03:59:12 -- target/filesystem.sh@101 -- # killprocess 163188 00:07:58.326 03:59:12 -- common/autotest_common.sh@936 -- # '[' -z 163188 ']' 00:07:58.326 03:59:12 -- common/autotest_common.sh@940 -- # kill -0 163188 00:07:58.326 03:59:12 -- common/autotest_common.sh@941 -- # uname 00:07:58.326 03:59:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:58.326 03:59:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 163188 00:07:58.326 03:59:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:58.326 03:59:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:58.326 03:59:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 163188' 00:07:58.326 killing process with pid 163188 00:07:58.326 03:59:12 -- common/autotest_common.sh@955 -- # kill 163188 00:07:58.326 03:59:12 -- common/autotest_common.sh@960 -- # wait 163188 00:07:58.586 03:59:13 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:58.586 00:07:58.586 real 0m7.919s 00:07:58.586 user 0m31.003s 00:07:58.586 sys 0m1.148s 00:07:58.586 03:59:13 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:07:58.586 03:59:13 -- common/autotest_common.sh@10 -- # set +x 00:07:58.586 ************************************ 00:07:58.586 END TEST nvmf_filesystem_in_capsule 00:07:58.586 ************************************ 00:07:58.845 03:59:13 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:58.845 03:59:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:58.845 03:59:13 -- nvmf/common.sh@117 -- # sync 00:07:58.845 03:59:13 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:58.845 03:59:13 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:58.845 03:59:13 -- nvmf/common.sh@120 -- # set +e 00:07:58.845 03:59:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:58.845 03:59:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:58.845 rmmod nvme_rdma 00:07:58.845 rmmod nvme_fabrics 00:07:58.845 03:59:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:58.845 03:59:13 -- nvmf/common.sh@124 -- # set -e 00:07:58.845 03:59:13 -- nvmf/common.sh@125 -- # return 0 00:07:58.845 03:59:13 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:07:58.845 03:59:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:58.845 03:59:13 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:07:58.845 00:07:58.845 real 0m22.362s 00:07:58.845 user 1m5.893s 00:07:58.845 sys 0m6.592s 00:07:58.845 03:59:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:58.845 03:59:13 -- common/autotest_common.sh@10 -- # set +x 00:07:58.845 ************************************ 00:07:58.845 END TEST nvmf_filesystem 00:07:58.845 ************************************ 00:07:58.845 03:59:13 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:07:58.845 03:59:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:58.845 03:59:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.845 03:59:13 -- common/autotest_common.sh@10 -- # set +x 
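The `waitforserial` and `waitforserial_disconnect` helpers seen throughout the trace poll `lsblk -l -o NAME,SERIAL` in a bounded loop until the expected serial (`SPDKISFASTANDAWESOME`) appears or disappears. A standalone sketch of that polling pattern, with `list_devices` as a hypothetical stub in place of `lsblk` (here the device "appears" on the third poll):

```shell
#!/usr/bin/env bash
# Standalone sketch of the waitforserial polling pattern from
# autotest_common.sh: poll a device listing until the expected serial
# shows up, bounded by a retry budget of 16 attempts.
attempt=0
list_devices() {
    # stub for 'lsblk -l -o NAME,SERIAL': pretend the namespace
    # appears on the third poll
    if (( attempt >= 3 )); then
        echo "nvme0n1 SPDKISFASTANDAWESOME"
    fi
}
i=0
nvme_devices=0
while (( i++ <= 15 )); do
    attempt=$(( attempt + 1 ))
    nvme_devices=$(list_devices | grep -c SPDKISFASTANDAWESOME)
    (( nvme_devices == 1 )) && break
    sleep 0  # the real helper sleeps 2s between polls
done
echo "found $nvme_devices device(s) after $attempt poll(s)"
```

The real helper returns 0 once `nvme_devices == nvme_device_counter`, which is why the log shows `return 0` immediately after the `grep -c` count reaches 1.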
00:07:58.845 ************************************ 00:07:58.845 START TEST nvmf_discovery 00:07:58.845 ************************************ 00:07:58.845 03:59:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:07:59.105 * Looking for test storage... 00:07:59.105 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:59.105 03:59:13 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.105 03:59:13 -- nvmf/common.sh@7 -- # uname -s 00:07:59.105 03:59:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.105 03:59:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.105 03:59:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.105 03:59:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.105 03:59:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.105 03:59:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.105 03:59:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.105 03:59:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.105 03:59:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.105 03:59:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.105 03:59:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:07:59.105 03:59:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:07:59.105 03:59:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.105 03:59:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.105 03:59:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.105 03:59:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.105 03:59:13 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:59.105 03:59:13 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.105 03:59:13 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.105 03:59:13 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.105 03:59:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.105 03:59:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.105 03:59:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.105 03:59:13 -- paths/export.sh@5 -- # export PATH 00:07:59.105 03:59:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.105 03:59:13 -- nvmf/common.sh@47 -- # : 0 00:07:59.105 03:59:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:59.105 03:59:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:59.105 03:59:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.105 03:59:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.105 03:59:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.105 03:59:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:59.105 03:59:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:59.105 03:59:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:59.105 03:59:13 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:59.105 03:59:13 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:59.105 03:59:13 -- 
target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:59.105 03:59:13 -- target/discovery.sh@15 -- # hash nvme 00:07:59.105 03:59:13 -- target/discovery.sh@20 -- # nvmftestinit 00:07:59.105 03:59:13 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:07:59.105 03:59:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.105 03:59:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:59.105 03:59:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:59.105 03:59:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:59.105 03:59:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.105 03:59:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:59.105 03:59:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.105 03:59:13 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:59.105 03:59:13 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:59.105 03:59:13 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:59.105 03:59:13 -- common/autotest_common.sh@10 -- # set +x 00:08:04.389 03:59:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:04.389 03:59:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:04.389 03:59:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:04.389 03:59:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:04.389 03:59:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:04.389 03:59:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:04.389 03:59:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:04.389 03:59:18 -- nvmf/common.sh@295 -- # net_devs=() 00:08:04.389 03:59:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:04.389 03:59:18 -- nvmf/common.sh@296 -- # e810=() 00:08:04.389 03:59:18 -- nvmf/common.sh@296 -- # local -ga e810 00:08:04.389 03:59:18 -- nvmf/common.sh@297 -- # x722=() 00:08:04.389 03:59:18 -- nvmf/common.sh@297 -- # local -ga x722 00:08:04.389 03:59:18 -- nvmf/common.sh@298 -- # mlx=() 00:08:04.389 
03:59:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:04.389 03:59:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:04.389 03:59:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:04.389 03:59:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:04.389 03:59:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:04.389 03:59:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:04.389 03:59:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:04.389 03:59:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:04.389 03:59:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:04.389 03:59:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:04.389 03:59:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:04.389 03:59:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:04.389 03:59:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:04.389 03:59:18 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:04.389 03:59:18 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:04.389 03:59:18 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:04.389 03:59:18 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:04.389 03:59:18 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:04.389 03:59:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:04.389 03:59:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:04.389 03:59:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:04.389 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:04.389 03:59:18 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:04.389 03:59:18 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:04.389 03:59:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:08:04.389 03:59:18 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:04.389 03:59:18 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:04.389 03:59:18 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:04.389 03:59:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:04.389 03:59:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:04.389 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:04.389 03:59:18 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:04.389 03:59:18 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:04.389 03:59:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:04.389 03:59:18 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:04.389 03:59:18 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:04.389 03:59:18 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:04.389 03:59:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:04.389 03:59:18 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:04.389 03:59:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:04.389 03:59:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.389 03:59:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:04.389 03:59:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.389 03:59:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:04.389 Found net devices under 0000:18:00.0: mlx_0_0 00:08:04.389 03:59:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.389 03:59:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:04.389 03:59:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.389 03:59:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:04.389 03:59:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.389 03:59:18 -- nvmf/common.sh@389 -- # 
echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:04.389 Found net devices under 0000:18:00.1: mlx_0_1 00:08:04.389 03:59:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.389 03:59:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:04.389 03:59:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:04.389 03:59:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:04.389 03:59:18 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:08:04.389 03:59:18 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:08:04.389 03:59:18 -- nvmf/common.sh@409 -- # rdma_device_init 00:08:04.389 03:59:18 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:08:04.389 03:59:18 -- nvmf/common.sh@58 -- # uname 00:08:04.389 03:59:18 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:04.389 03:59:18 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:04.389 03:59:18 -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:04.389 03:59:18 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:04.389 03:59:18 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:04.389 03:59:18 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:04.389 03:59:18 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:04.389 03:59:18 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:04.389 03:59:18 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:08:04.389 03:59:18 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:04.389 03:59:18 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:04.389 03:59:18 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:04.389 03:59:18 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:04.389 03:59:18 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:04.389 03:59:18 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:04.389 03:59:18 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:04.389 03:59:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:04.389 03:59:18 -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:08:04.389 03:59:18 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:04.389 03:59:18 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:04.389 03:59:18 -- nvmf/common.sh@105 -- # continue 2 00:08:04.389 03:59:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:04.389 03:59:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:04.389 03:59:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:04.389 03:59:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:04.389 03:59:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:04.389 03:59:18 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:04.389 03:59:18 -- nvmf/common.sh@105 -- # continue 2 00:08:04.389 03:59:18 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:04.389 03:59:18 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:04.389 03:59:18 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:04.389 03:59:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:04.389 03:59:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:04.389 03:59:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:04.389 03:59:18 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:04.390 03:59:18 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:04.390 03:59:18 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:04.390 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:04.390 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:08:04.390 altname enp24s0f0np0 00:08:04.390 altname ens785f0np0 00:08:04.390 inet 192.168.100.8/24 scope global mlx_0_0 00:08:04.390 valid_lft forever preferred_lft forever 00:08:04.390 03:59:18 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:04.390 03:59:18 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:04.390 03:59:18 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:04.390 03:59:18 -- nvmf/common.sh@113 -- 
# ip -o -4 addr show mlx_0_1 00:08:04.390 03:59:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:04.390 03:59:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:04.390 03:59:18 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:04.390 03:59:18 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:04.390 03:59:18 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:04.390 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:04.390 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:08:04.390 altname enp24s0f1np1 00:08:04.390 altname ens785f1np1 00:08:04.390 inet 192.168.100.9/24 scope global mlx_0_1 00:08:04.390 valid_lft forever preferred_lft forever 00:08:04.390 03:59:18 -- nvmf/common.sh@411 -- # return 0 00:08:04.390 03:59:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:04.390 03:59:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:04.390 03:59:18 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:08:04.390 03:59:18 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:08:04.390 03:59:18 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:04.390 03:59:18 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:04.390 03:59:18 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:04.390 03:59:18 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:04.390 03:59:18 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:04.390 03:59:18 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:04.390 03:59:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:04.390 03:59:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:04.390 03:59:18 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:04.390 03:59:18 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:04.390 03:59:18 -- nvmf/common.sh@105 -- # continue 2 00:08:04.390 03:59:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:04.390 03:59:18 -- 
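The `get_ip_address` helper traced above reduces one line of `ip -o -4 addr show` to a bare IPv4 address with `awk`/`cut`. A rough reproduction against an inline sample line (the sample mirrors the log's mlx_0_0 output rather than querying a live interface):

```shell
# Field 4 of `ip -o -4 addr show <if>` is "addr/prefix"; cut drops the prefix.
sample='2: mlx_0_0    inet 192.168.100.8/24 brd 192.168.100.255 scope global mlx_0_0'
ip_addr=$(printf '%s\n' "$sample" | awk '{print $4}' | cut -d/ -f1)
echo "$ip_addr"   # -> 192.168.100.8
```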
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:04.390 03:59:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:04.390 03:59:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:04.390 03:59:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:04.390 03:59:18 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:04.390 03:59:18 -- nvmf/common.sh@105 -- # continue 2 00:08:04.390 03:59:18 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:04.390 03:59:18 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:04.390 03:59:18 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:04.390 03:59:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:04.390 03:59:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:04.390 03:59:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:04.390 03:59:18 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:04.390 03:59:18 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:04.390 03:59:18 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:04.390 03:59:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:04.390 03:59:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:04.390 03:59:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:04.390 03:59:18 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:08:04.390 192.168.100.9' 00:08:04.390 03:59:18 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:04.390 192.168.100.9' 00:08:04.390 03:59:18 -- nvmf/common.sh@446 -- # head -n 1 00:08:04.390 03:59:18 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:04.390 03:59:18 -- nvmf/common.sh@447 -- # head -n 1 00:08:04.390 03:59:18 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:08:04.390 192.168.100.9' 00:08:04.390 03:59:18 -- nvmf/common.sh@447 -- # tail -n +2 00:08:04.390 03:59:18 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:04.390 03:59:18 -- nvmf/common.sh@448 -- # '[' -z 
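The `head -n 1` / `tail -n +2` pair above slices the newline-separated `RDMA_IP_LIST` into first and second target IPs. Sketch with the two addresses from this run:

```shell
# NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP derivation (common.sh@446-447):
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2)
```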
192.168.100.8 ']' 00:08:04.390 03:59:18 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:04.390 03:59:18 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:08:04.390 03:59:18 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:08:04.390 03:59:18 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:08:04.390 03:59:18 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:04.390 03:59:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:04.390 03:59:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:04.390 03:59:18 -- common/autotest_common.sh@10 -- # set +x 00:08:04.390 03:59:18 -- nvmf/common.sh@470 -- # nvmfpid=168097 00:08:04.390 03:59:18 -- nvmf/common.sh@471 -- # waitforlisten 168097 00:08:04.390 03:59:18 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:04.390 03:59:18 -- common/autotest_common.sh@817 -- # '[' -z 168097 ']' 00:08:04.390 03:59:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.390 03:59:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:04.390 03:59:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.390 03:59:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:04.390 03:59:18 -- common/autotest_common.sh@10 -- # set +x 00:08:04.390 [2024-04-19 03:59:18.857782] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
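The `NVMF_TRANSPORT_OPTS` value above is built in two steps: the transport flag first, then the RDMA shared-buffer option appended once the transport is confirmed to be rdma. A minimal sketch of that assembly:

```shell
# Mirrors common.sh@443 and @452: base opts, then the rdma-specific append.
TEST_TRANSPORT=rdma
NVMF_TRANSPORT_OPTS="-t $TEST_TRANSPORT"
if [ "$TEST_TRANSPORT" = rdma ]; then
  NVMF_TRANSPORT_OPTS="$NVMF_TRANSPORT_OPTS --num-shared-buffers 1024"
fi
echo "$NVMF_TRANSPORT_OPTS"
```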
00:08:04.390 [2024-04-19 03:59:18.857824] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.390 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.390 [2024-04-19 03:59:18.906807] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:04.650 [2024-04-19 03:59:18.979237] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.650 [2024-04-19 03:59:18.979270] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.650 [2024-04-19 03:59:18.979277] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.650 [2024-04-19 03:59:18.979282] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:04.650 [2024-04-19 03:59:18.979287] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:04.650 [2024-04-19 03:59:18.979324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.650 [2024-04-19 03:59:18.979423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.650 [2024-04-19 03:59:18.979504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.650 [2024-04-19 03:59:18.979505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.220 03:59:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:05.220 03:59:19 -- common/autotest_common.sh@850 -- # return 0 00:08:05.220 03:59:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:05.220 03:59:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:05.220 03:59:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.220 03:59:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.220 03:59:19 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:05.220 03:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.220 03:59:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.220 [2024-04-19 03:59:19.709677] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x178b6c0/0x178fbb0) succeed. 00:08:05.220 [2024-04-19 03:59:19.719014] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x178ccb0/0x17d1240) succeed. 
00:08:05.480 03:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.480 03:59:19 -- target/discovery.sh@26 -- # seq 1 4 00:08:05.480 03:59:19 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:05.480 03:59:19 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:05.480 03:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.480 03:59:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.480 Null1 00:08:05.480 03:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.480 03:59:19 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:05.480 03:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.480 03:59:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.480 03:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.480 03:59:19 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:05.480 03:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.480 03:59:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.480 03:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.480 03:59:19 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:05.480 03:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.480 03:59:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.480 [2024-04-19 03:59:19.872247] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:05.480 03:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.480 03:59:19 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:05.480 03:59:19 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:05.480 03:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.480 03:59:19 -- 
common/autotest_common.sh@10 -- # set +x 00:08:05.480 Null2 00:08:05.480 03:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.480 03:59:19 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:05.480 03:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.480 03:59:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.480 03:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.480 03:59:19 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:05.480 03:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.480 03:59:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.480 03:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.480 03:59:19 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:08:05.480 03:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.480 03:59:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.480 03:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.480 03:59:19 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:05.480 03:59:19 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:05.480 03:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.480 03:59:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.480 Null3 00:08:05.480 03:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.480 03:59:19 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:05.480 03:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.480 03:59:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.480 03:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.480 03:59:19 -- target/discovery.sh@29 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:05.480 03:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.480 03:59:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.480 03:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.480 03:59:19 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:08:05.480 03:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.480 03:59:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.480 03:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.480 03:59:19 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:05.480 03:59:19 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:05.480 03:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.480 03:59:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.480 Null4 00:08:05.480 03:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.480 03:59:19 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:05.480 03:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.480 03:59:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.480 03:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.480 03:59:19 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:05.480 03:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.480 03:59:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.480 03:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.480 03:59:19 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:08:05.480 03:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.480 03:59:19 -- common/autotest_common.sh@10 -- # set 
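The Null1..Null4 setup above is one loop issuing four RPCs per iteration: create a null bdev, create the subsystem, attach the bdev as a namespace, and add an RDMA listener. A sketch that only assembles the command lines (executing them would need a live `nvmf_tgt` and `rpc.py`):

```shell
# Per-iteration RPC sequence from discovery.sh@26-30, as strings.
cmds=()
for i in $(seq 1 4); do
  cmds+=("bdev_null_create Null$i 102400 512")
  cmds+=("nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i")
  cmds+=("nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i")
  cmds+=("nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420")
done
printf '%s\n' "${cmds[@]}"
```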
+x 00:08:05.480 03:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.480 03:59:19 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:05.480 03:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.480 03:59:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.480 03:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.480 03:59:19 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:08:05.480 03:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.480 03:59:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.480 03:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.480 03:59:19 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:08:05.741 00:08:05.741 Discovery Log Number of Records 6, Generation counter 6 00:08:05.741 =====Discovery Log Entry 0====== 00:08:05.741 trtype: rdma 00:08:05.741 adrfam: ipv4 00:08:05.741 subtype: current discovery subsystem 00:08:05.741 treq: not required 00:08:05.741 portid: 0 00:08:05.741 trsvcid: 4420 00:08:05.741 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:05.741 traddr: 192.168.100.8 00:08:05.741 eflags: explicit discovery connections, duplicate discovery information 00:08:05.741 rdma_prtype: not specified 00:08:05.741 rdma_qptype: connected 00:08:05.741 rdma_cms: rdma-cm 00:08:05.741 rdma_pkey: 0x0000 00:08:05.741 =====Discovery Log Entry 1====== 00:08:05.741 trtype: rdma 00:08:05.741 adrfam: ipv4 00:08:05.741 subtype: nvme subsystem 00:08:05.741 treq: not required 00:08:05.741 portid: 0 00:08:05.741 trsvcid: 4420 00:08:05.741 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:05.741 traddr: 192.168.100.8 00:08:05.741 eflags: none 00:08:05.741 rdma_prtype: not specified 00:08:05.741 
rdma_qptype: connected 00:08:05.741 rdma_cms: rdma-cm 00:08:05.741 rdma_pkey: 0x0000 00:08:05.741 =====Discovery Log Entry 2====== 00:08:05.741 trtype: rdma 00:08:05.741 adrfam: ipv4 00:08:05.741 subtype: nvme subsystem 00:08:05.741 treq: not required 00:08:05.741 portid: 0 00:08:05.741 trsvcid: 4420 00:08:05.741 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:05.741 traddr: 192.168.100.8 00:08:05.741 eflags: none 00:08:05.741 rdma_prtype: not specified 00:08:05.741 rdma_qptype: connected 00:08:05.741 rdma_cms: rdma-cm 00:08:05.741 rdma_pkey: 0x0000 00:08:05.741 =====Discovery Log Entry 3====== 00:08:05.741 trtype: rdma 00:08:05.741 adrfam: ipv4 00:08:05.741 subtype: nvme subsystem 00:08:05.741 treq: not required 00:08:05.741 portid: 0 00:08:05.741 trsvcid: 4420 00:08:05.741 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:05.741 traddr: 192.168.100.8 00:08:05.741 eflags: none 00:08:05.741 rdma_prtype: not specified 00:08:05.741 rdma_qptype: connected 00:08:05.741 rdma_cms: rdma-cm 00:08:05.741 rdma_pkey: 0x0000 00:08:05.741 =====Discovery Log Entry 4====== 00:08:05.741 trtype: rdma 00:08:05.741 adrfam: ipv4 00:08:05.741 subtype: nvme subsystem 00:08:05.741 treq: not required 00:08:05.741 portid: 0 00:08:05.741 trsvcid: 4420 00:08:05.741 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:05.741 traddr: 192.168.100.8 00:08:05.741 eflags: none 00:08:05.741 rdma_prtype: not specified 00:08:05.741 rdma_qptype: connected 00:08:05.741 rdma_cms: rdma-cm 00:08:05.741 rdma_pkey: 0x0000 00:08:05.741 =====Discovery Log Entry 5====== 00:08:05.741 trtype: rdma 00:08:05.741 adrfam: ipv4 00:08:05.741 subtype: discovery subsystem referral 00:08:05.741 treq: not required 00:08:05.741 portid: 0 00:08:05.741 trsvcid: 4430 00:08:05.741 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:05.741 traddr: 192.168.100.8 00:08:05.741 eflags: none 00:08:05.741 rdma_prtype: unrecognized 00:08:05.741 rdma_qptype: unrecognized 00:08:05.741 rdma_cms: unrecognized 00:08:05.741 rdma_pkey: 0x0000 00:08:05.741 03:59:20 -- 
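The `nvme discover` transcript above reports 6 records (4 subsystems, the current discovery subsystem, and the 4430 referral). A quick way to sanity-check such output is to count the per-record headers; sketched here against an inline sample rather than a live discover run:

```shell
# Count "=====Discovery Log Entry N======" headers in discover output.
sample='=====Discovery Log Entry 0======
=====Discovery Log Entry 1======
=====Discovery Log Entry 2======'
records=$(printf '%s\n' "$sample" | grep -c '^=====Discovery Log Entry')
echo "$records"
```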
target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:05.741 Perform nvmf subsystem discovery via RPC 00:08:05.741 03:59:20 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:05.741 03:59:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.741 03:59:20 -- common/autotest_common.sh@10 -- # set +x 00:08:05.741 [2024-04-19 03:59:20.084635] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:05.741 [ 00:08:05.741 { 00:08:05.741 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:05.741 "subtype": "Discovery", 00:08:05.741 "listen_addresses": [ 00:08:05.741 { 00:08:05.741 "transport": "RDMA", 00:08:05.741 "trtype": "RDMA", 00:08:05.741 "adrfam": "IPv4", 00:08:05.741 "traddr": "192.168.100.8", 00:08:05.741 "trsvcid": "4420" 00:08:05.741 } 00:08:05.741 ], 00:08:05.741 "allow_any_host": true, 00:08:05.741 "hosts": [] 00:08:05.741 }, 00:08:05.741 { 00:08:05.741 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:05.741 "subtype": "NVMe", 00:08:05.741 "listen_addresses": [ 00:08:05.741 { 00:08:05.741 "transport": "RDMA", 00:08:05.741 "trtype": "RDMA", 00:08:05.741 "adrfam": "IPv4", 00:08:05.741 "traddr": "192.168.100.8", 00:08:05.741 "trsvcid": "4420" 00:08:05.741 } 00:08:05.741 ], 00:08:05.741 "allow_any_host": true, 00:08:05.741 "hosts": [], 00:08:05.741 "serial_number": "SPDK00000000000001", 00:08:05.741 "model_number": "SPDK bdev Controller", 00:08:05.741 "max_namespaces": 32, 00:08:05.741 "min_cntlid": 1, 00:08:05.741 "max_cntlid": 65519, 00:08:05.741 "namespaces": [ 00:08:05.741 { 00:08:05.741 "nsid": 1, 00:08:05.741 "bdev_name": "Null1", 00:08:05.741 "name": "Null1", 00:08:05.741 "nguid": "E5294E0151604A3A8FDC94DDC0AC33D1", 00:08:05.741 "uuid": "e5294e01-5160-4a3a-8fdc-94ddc0ac33d1" 00:08:05.741 } 00:08:05.741 ] 00:08:05.741 }, 00:08:05.741 { 00:08:05.741 "nqn": "nqn.2016-06.io.spdk:cnode2", 
00:08:05.741 "subtype": "NVMe", 00:08:05.741 "listen_addresses": [ 00:08:05.741 { 00:08:05.741 "transport": "RDMA", 00:08:05.741 "trtype": "RDMA", 00:08:05.741 "adrfam": "IPv4", 00:08:05.741 "traddr": "192.168.100.8", 00:08:05.741 "trsvcid": "4420" 00:08:05.741 } 00:08:05.741 ], 00:08:05.741 "allow_any_host": true, 00:08:05.741 "hosts": [], 00:08:05.741 "serial_number": "SPDK00000000000002", 00:08:05.741 "model_number": "SPDK bdev Controller", 00:08:05.741 "max_namespaces": 32, 00:08:05.741 "min_cntlid": 1, 00:08:05.741 "max_cntlid": 65519, 00:08:05.741 "namespaces": [ 00:08:05.741 { 00:08:05.741 "nsid": 1, 00:08:05.741 "bdev_name": "Null2", 00:08:05.741 "name": "Null2", 00:08:05.741 "nguid": "32788D2C04F5424B9209567DB4888EDE", 00:08:05.741 "uuid": "32788d2c-04f5-424b-9209-567db4888ede" 00:08:05.741 } 00:08:05.741 ] 00:08:05.741 }, 00:08:05.741 { 00:08:05.741 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:05.741 "subtype": "NVMe", 00:08:05.741 "listen_addresses": [ 00:08:05.741 { 00:08:05.741 "transport": "RDMA", 00:08:05.741 "trtype": "RDMA", 00:08:05.741 "adrfam": "IPv4", 00:08:05.741 "traddr": "192.168.100.8", 00:08:05.741 "trsvcid": "4420" 00:08:05.741 } 00:08:05.741 ], 00:08:05.741 "allow_any_host": true, 00:08:05.741 "hosts": [], 00:08:05.741 "serial_number": "SPDK00000000000003", 00:08:05.741 "model_number": "SPDK bdev Controller", 00:08:05.741 "max_namespaces": 32, 00:08:05.741 "min_cntlid": 1, 00:08:05.741 "max_cntlid": 65519, 00:08:05.741 "namespaces": [ 00:08:05.741 { 00:08:05.741 "nsid": 1, 00:08:05.741 "bdev_name": "Null3", 00:08:05.741 "name": "Null3", 00:08:05.742 "nguid": "6D91B5B870034534BB95BE0BB5C5B415", 00:08:05.742 "uuid": "6d91b5b8-7003-4534-bb95-be0bb5c5b415" 00:08:05.742 } 00:08:05.742 ] 00:08:05.742 }, 00:08:05.742 { 00:08:05.742 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:05.742 "subtype": "NVMe", 00:08:05.742 "listen_addresses": [ 00:08:05.742 { 00:08:05.742 "transport": "RDMA", 00:08:05.742 "trtype": "RDMA", 00:08:05.742 "adrfam": "IPv4", 
00:08:05.742 "traddr": "192.168.100.8", 00:08:05.742 "trsvcid": "4420" 00:08:05.742 } 00:08:05.742 ], 00:08:05.742 "allow_any_host": true, 00:08:05.742 "hosts": [], 00:08:05.742 "serial_number": "SPDK00000000000004", 00:08:05.742 "model_number": "SPDK bdev Controller", 00:08:05.742 "max_namespaces": 32, 00:08:05.742 "min_cntlid": 1, 00:08:05.742 "max_cntlid": 65519, 00:08:05.742 "namespaces": [ 00:08:05.742 { 00:08:05.742 "nsid": 1, 00:08:05.742 "bdev_name": "Null4", 00:08:05.742 "name": "Null4", 00:08:05.742 "nguid": "3B9415E920B74EC399D16EC4A73109ED", 00:08:05.742 "uuid": "3b9415e9-20b7-4ec3-99d1-6ec4a73109ed" 00:08:05.742 } 00:08:05.742 ] 00:08:05.742 } 00:08:05.742 ] 00:08:05.742 03:59:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.742 03:59:20 -- target/discovery.sh@42 -- # seq 1 4 00:08:05.742 03:59:20 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:05.742 03:59:20 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:05.742 03:59:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.742 03:59:20 -- common/autotest_common.sh@10 -- # set +x 00:08:05.742 03:59:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.742 03:59:20 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:05.742 03:59:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.742 03:59:20 -- common/autotest_common.sh@10 -- # set +x 00:08:05.742 03:59:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.742 03:59:20 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:05.742 03:59:20 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:05.742 03:59:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.742 03:59:20 -- common/autotest_common.sh@10 -- # set +x 00:08:05.742 03:59:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.742 03:59:20 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 
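The `nvmf_get_subsystems` reply above is plain JSON, so the NQNs can be pulled out even without `jq` using `grep -o`/`cut`. Sketch against an inline sample shaped like the RPC reply:

```shell
# Extract every "nqn" value: grep the key/value pair, then take the
# quoted value (field 4 when splitting on double quotes).
json='[{"nqn": "nqn.2014-08.org.nvmexpress.discovery"},{"nqn": "nqn.2016-06.io.spdk:cnode1"}]'
nqns=$(printf '%s\n' "$json" | grep -o '"nqn": "[^"]*"' | cut -d'"' -f4)
printf '%s\n' "$nqns"
```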
00:08:05.742 03:59:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.742 03:59:20 -- common/autotest_common.sh@10 -- # set +x 00:08:05.742 03:59:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.742 03:59:20 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:05.742 03:59:20 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:05.742 03:59:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.742 03:59:20 -- common/autotest_common.sh@10 -- # set +x 00:08:05.742 03:59:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.742 03:59:20 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:05.742 03:59:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.742 03:59:20 -- common/autotest_common.sh@10 -- # set +x 00:08:05.742 03:59:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.742 03:59:20 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:05.742 03:59:20 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:05.742 03:59:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.742 03:59:20 -- common/autotest_common.sh@10 -- # set +x 00:08:05.742 03:59:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.742 03:59:20 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:05.742 03:59:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.742 03:59:20 -- common/autotest_common.sh@10 -- # set +x 00:08:05.742 03:59:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.742 03:59:20 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:08:05.742 03:59:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.742 03:59:20 -- common/autotest_common.sh@10 -- # set +x 00:08:05.742 03:59:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.742 03:59:20 -- target/discovery.sh@49 -- # 
rpc_cmd bdev_get_bdevs 00:08:05.742 03:59:20 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:05.742 03:59:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.742 03:59:20 -- common/autotest_common.sh@10 -- # set +x 00:08:05.742 03:59:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.742 03:59:20 -- target/discovery.sh@49 -- # check_bdevs= 00:08:05.742 03:59:20 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:05.742 03:59:20 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:05.742 03:59:20 -- target/discovery.sh@57 -- # nvmftestfini 00:08:05.742 03:59:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:05.742 03:59:20 -- nvmf/common.sh@117 -- # sync 00:08:05.742 03:59:20 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:05.742 03:59:20 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:05.742 03:59:20 -- nvmf/common.sh@120 -- # set +e 00:08:05.742 03:59:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:05.742 03:59:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:05.742 rmmod nvme_rdma 00:08:05.742 rmmod nvme_fabrics 00:08:05.742 03:59:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:05.742 03:59:20 -- nvmf/common.sh@124 -- # set -e 00:08:05.742 03:59:20 -- nvmf/common.sh@125 -- # return 0 00:08:05.742 03:59:20 -- nvmf/common.sh@478 -- # '[' -n 168097 ']' 00:08:05.742 03:59:20 -- nvmf/common.sh@479 -- # killprocess 168097 00:08:05.742 03:59:20 -- common/autotest_common.sh@936 -- # '[' -z 168097 ']' 00:08:05.742 03:59:20 -- common/autotest_common.sh@940 -- # kill -0 168097 00:08:06.001 03:59:20 -- common/autotest_common.sh@941 -- # uname 00:08:06.001 03:59:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:06.001 03:59:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 168097 00:08:06.001 03:59:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:06.001 03:59:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:06.001 
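The `killprocess 168097` teardown above probes liveness with `kill -0` before terminating and reaping the target. The same pattern, sketched against a throwaway background process instead of the real nvmf_tgt pid:

```shell
# kill -0 sends no signal; it only checks the pid exists (killprocess@940).
sleep 30 &
pid=$!
kill -0 "$pid" 2>/dev/null && running=1 || running=0   # liveness probe
kill "$pid" 2>/dev/null                                # terminate
wait "$pid" 2>/dev/null || true                        # reap; exit 143 is expected
kill -0 "$pid" 2>/dev/null && alive=1 || alive=0       # now gone
```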
03:59:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 168097' 00:08:06.001 killing process with pid 168097 00:08:06.001 03:59:20 -- common/autotest_common.sh@955 -- # kill 168097 00:08:06.001 [2024-04-19 03:59:20.315221] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:06.001 03:59:20 -- common/autotest_common.sh@960 -- # wait 168097 00:08:06.261 03:59:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:06.261 03:59:20 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:08:06.261 00:08:06.261 real 0m7.271s 00:08:06.261 user 0m7.923s 00:08:06.261 sys 0m4.471s 00:08:06.261 03:59:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:06.261 03:59:20 -- common/autotest_common.sh@10 -- # set +x 00:08:06.261 ************************************ 00:08:06.261 END TEST nvmf_discovery 00:08:06.261 ************************************ 00:08:06.261 03:59:20 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:06.261 03:59:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:06.261 03:59:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:06.261 03:59:20 -- common/autotest_common.sh@10 -- # set +x 00:08:06.261 ************************************ 00:08:06.261 START TEST nvmf_referrals 00:08:06.261 ************************************ 00:08:06.261 03:59:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:06.521 * Looking for test storage... 
00:08:06.521 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:06.521 03:59:20 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:06.521 03:59:20 -- nvmf/common.sh@7 -- # uname -s 00:08:06.521 03:59:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.521 03:59:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.521 03:59:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.521 03:59:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.521 03:59:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.521 03:59:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.521 03:59:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.521 03:59:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.521 03:59:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.521 03:59:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.521 03:59:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:08:06.521 03:59:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:08:06.521 03:59:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.521 03:59:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.521 03:59:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:06.521 03:59:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.521 03:59:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:06.521 03:59:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.521 03:59:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.521 03:59:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.522 03:59:20 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.522 03:59:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.522 03:59:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.522 03:59:20 -- paths/export.sh@5 -- # export PATH 00:08:06.522 03:59:20 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.522 03:59:20 -- nvmf/common.sh@47 -- # : 0 00:08:06.522 03:59:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:06.522 03:59:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:06.522 03:59:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.522 03:59:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.522 03:59:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.522 03:59:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:06.522 03:59:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:06.522 03:59:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:06.522 03:59:20 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:06.522 03:59:20 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:06.522 03:59:20 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:06.522 03:59:20 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:06.522 03:59:20 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:06.522 03:59:20 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:06.522 03:59:20 -- target/referrals.sh@37 -- # nvmftestinit 00:08:06.522 03:59:20 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:08:06.522 03:59:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.522 03:59:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:06.522 03:59:20 -- nvmf/common.sh@399 -- # local 
-g is_hw=no 00:08:06.522 03:59:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:06.522 03:59:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.522 03:59:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:06.522 03:59:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.522 03:59:20 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:06.522 03:59:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:06.522 03:59:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:06.522 03:59:20 -- common/autotest_common.sh@10 -- # set +x 00:08:11.805 03:59:26 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:11.805 03:59:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:11.805 03:59:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:11.805 03:59:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:11.805 03:59:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:11.805 03:59:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:11.805 03:59:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:11.805 03:59:26 -- nvmf/common.sh@295 -- # net_devs=() 00:08:11.805 03:59:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:11.805 03:59:26 -- nvmf/common.sh@296 -- # e810=() 00:08:11.805 03:59:26 -- nvmf/common.sh@296 -- # local -ga e810 00:08:11.805 03:59:26 -- nvmf/common.sh@297 -- # x722=() 00:08:11.805 03:59:26 -- nvmf/common.sh@297 -- # local -ga x722 00:08:11.805 03:59:26 -- nvmf/common.sh@298 -- # mlx=() 00:08:11.805 03:59:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:11.805 03:59:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:11.805 03:59:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:11.805 03:59:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:11.805 03:59:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:11.805 03:59:26 -- nvmf/common.sh@308 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:11.805 03:59:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:11.805 03:59:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:11.805 03:59:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:11.805 03:59:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:11.805 03:59:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:11.805 03:59:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:11.805 03:59:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:11.805 03:59:26 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:11.805 03:59:26 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:11.805 03:59:26 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:11.805 03:59:26 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:11.805 03:59:26 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:11.805 03:59:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:11.805 03:59:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:11.805 03:59:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:11.805 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:11.805 03:59:26 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:11.805 03:59:26 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:11.805 03:59:26 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:11.805 03:59:26 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:11.805 03:59:26 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:11.805 03:59:26 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:11.805 03:59:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:11.805 03:59:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:11.805 Found 0000:18:00.1 (0x15b3 - 0x1015) 
00:08:11.805 03:59:26 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:11.805 03:59:26 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:11.805 03:59:26 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:11.805 03:59:26 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:11.805 03:59:26 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:11.805 03:59:26 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:11.805 03:59:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:11.805 03:59:26 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:11.805 03:59:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:11.805 03:59:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.805 03:59:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:11.805 03:59:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.805 03:59:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:11.805 Found net devices under 0000:18:00.0: mlx_0_0 00:08:11.805 03:59:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.805 03:59:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:11.805 03:59:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.805 03:59:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:11.805 03:59:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.805 03:59:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:11.805 Found net devices under 0000:18:00.1: mlx_0_1 00:08:11.805 03:59:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.805 03:59:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:11.805 03:59:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:11.805 03:59:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:11.805 03:59:26 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:08:11.805 
03:59:26 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:08:11.805 03:59:26 -- nvmf/common.sh@409 -- # rdma_device_init 00:08:11.805 03:59:26 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:08:11.805 03:59:26 -- nvmf/common.sh@58 -- # uname 00:08:11.805 03:59:26 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:11.805 03:59:26 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:11.805 03:59:26 -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:11.805 03:59:26 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:11.805 03:59:26 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:11.805 03:59:26 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:11.806 03:59:26 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:11.806 03:59:26 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:11.806 03:59:26 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:08:11.806 03:59:26 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:11.806 03:59:26 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:11.806 03:59:26 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:11.806 03:59:26 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:11.806 03:59:26 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:11.806 03:59:26 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:11.806 03:59:26 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:11.806 03:59:26 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:11.806 03:59:26 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:11.806 03:59:26 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:11.806 03:59:26 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:11.806 03:59:26 -- nvmf/common.sh@105 -- # continue 2 00:08:11.806 03:59:26 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:11.806 03:59:26 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:11.806 03:59:26 -- nvmf/common.sh@103 -- # [[ mlx_0_1 
== \m\l\x\_\0\_\0 ]] 00:08:11.806 03:59:26 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:11.806 03:59:26 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:11.806 03:59:26 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:11.806 03:59:26 -- nvmf/common.sh@105 -- # continue 2 00:08:11.806 03:59:26 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:11.806 03:59:26 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:11.806 03:59:26 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:11.806 03:59:26 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:11.806 03:59:26 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:11.806 03:59:26 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:11.806 03:59:26 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:11.806 03:59:26 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:11.806 03:59:26 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:11.806 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:11.806 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:08:11.806 altname enp24s0f0np0 00:08:11.806 altname ens785f0np0 00:08:11.806 inet 192.168.100.8/24 scope global mlx_0_0 00:08:11.806 valid_lft forever preferred_lft forever 00:08:11.806 03:59:26 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:11.806 03:59:26 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:11.806 03:59:26 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:11.806 03:59:26 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:11.806 03:59:26 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:11.806 03:59:26 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:11.806 03:59:26 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:11.806 03:59:26 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:11.806 03:59:26 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:11.806 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:11.806 link/ether 
50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:08:11.806 altname enp24s0f1np1 00:08:11.806 altname ens785f1np1 00:08:11.806 inet 192.168.100.9/24 scope global mlx_0_1 00:08:11.806 valid_lft forever preferred_lft forever 00:08:11.806 03:59:26 -- nvmf/common.sh@411 -- # return 0 00:08:11.806 03:59:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:11.806 03:59:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:11.806 03:59:26 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:08:11.806 03:59:26 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:08:11.806 03:59:26 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:11.806 03:59:26 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:11.806 03:59:26 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:11.806 03:59:26 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:11.806 03:59:26 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:11.806 03:59:26 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:11.806 03:59:26 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:11.806 03:59:26 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:11.806 03:59:26 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:11.806 03:59:26 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:11.806 03:59:26 -- nvmf/common.sh@105 -- # continue 2 00:08:11.806 03:59:26 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:11.806 03:59:26 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:11.806 03:59:26 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:11.806 03:59:26 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:11.806 03:59:26 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:11.806 03:59:26 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:11.806 03:59:26 -- nvmf/common.sh@105 -- # continue 2 00:08:11.806 03:59:26 -- nvmf/common.sh@86 
-- # for nic_name in $(get_rdma_if_list) 00:08:11.806 03:59:26 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:11.806 03:59:26 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:11.806 03:59:26 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:11.806 03:59:26 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:11.806 03:59:26 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:11.806 03:59:26 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:11.806 03:59:26 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:11.806 03:59:26 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:11.806 03:59:26 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:11.806 03:59:26 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:11.806 03:59:26 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:11.806 03:59:26 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:08:11.806 192.168.100.9' 00:08:11.806 03:59:26 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:11.806 192.168.100.9' 00:08:11.806 03:59:26 -- nvmf/common.sh@446 -- # head -n 1 00:08:11.806 03:59:26 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:11.806 03:59:26 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:08:11.806 192.168.100.9' 00:08:11.806 03:59:26 -- nvmf/common.sh@447 -- # head -n 1 00:08:11.806 03:59:26 -- nvmf/common.sh@447 -- # tail -n +2 00:08:11.806 03:59:26 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:11.806 03:59:26 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:08:11.806 03:59:26 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:11.806 03:59:26 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:08:11.806 03:59:26 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:08:11.806 03:59:26 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:08:12.067 03:59:26 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:12.067 03:59:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 
00:08:12.067 03:59:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:12.067 03:59:26 -- common/autotest_common.sh@10 -- # set +x 00:08:12.067 03:59:26 -- nvmf/common.sh@470 -- # nvmfpid=171646 00:08:12.067 03:59:26 -- nvmf/common.sh@471 -- # waitforlisten 171646 00:08:12.067 03:59:26 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:12.067 03:59:26 -- common/autotest_common.sh@817 -- # '[' -z 171646 ']' 00:08:12.067 03:59:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.067 03:59:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:12.067 03:59:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.067 03:59:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:12.067 03:59:26 -- common/autotest_common.sh@10 -- # set +x 00:08:12.067 [2024-04-19 03:59:26.396959] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:08:12.067 [2024-04-19 03:59:26.397004] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.067 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.067 [2024-04-19 03:59:26.450026] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:12.067 [2024-04-19 03:59:26.520281] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.067 [2024-04-19 03:59:26.520321] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:12.067 [2024-04-19 03:59:26.520327] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:12.067 [2024-04-19 03:59:26.520332] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:12.067 [2024-04-19 03:59:26.520336] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:12.067 [2024-04-19 03:59:26.520397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.067 [2024-04-19 03:59:26.520498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.067 [2024-04-19 03:59:26.520517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.067 [2024-04-19 03:59:26.520524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.006 03:59:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:13.006 03:59:27 -- common/autotest_common.sh@850 -- # return 0 00:08:13.006 03:59:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:13.006 03:59:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:13.006 03:59:27 -- common/autotest_common.sh@10 -- # set +x 00:08:13.006 03:59:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.006 03:59:27 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:13.006 03:59:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:13.006 03:59:27 -- common/autotest_common.sh@10 -- # set +x 00:08:13.006 [2024-04-19 03:59:27.242185] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19956c0/0x1999bb0) succeed. 00:08:13.006 [2024-04-19 03:59:27.251560] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1996cb0/0x19db240) succeed. 
00:08:13.006 03:59:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:13.006 03:59:27 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:08:13.006 03:59:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:13.006 03:59:27 -- common/autotest_common.sh@10 -- # set +x 00:08:13.006 [2024-04-19 03:59:27.366260] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:08:13.006 03:59:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:13.006 03:59:27 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:08:13.006 03:59:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:13.006 03:59:27 -- common/autotest_common.sh@10 -- # set +x 00:08:13.006 03:59:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:13.006 03:59:27 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:08:13.006 03:59:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:13.006 03:59:27 -- common/autotest_common.sh@10 -- # set +x 00:08:13.006 03:59:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:13.006 03:59:27 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:08:13.006 03:59:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:13.007 03:59:27 -- common/autotest_common.sh@10 -- # set +x 00:08:13.007 03:59:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:13.007 03:59:27 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.007 03:59:27 -- target/referrals.sh@48 -- # jq length 00:08:13.007 03:59:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:13.007 03:59:27 -- common/autotest_common.sh@10 -- # set +x 00:08:13.007 03:59:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:13.007 03:59:27 -- target/referrals.sh@48 -- # (( 3 == 3 )) 
00:08:13.007 03:59:27 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:13.007 03:59:27 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:13.007 03:59:27 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.007 03:59:27 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:13.007 03:59:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:13.007 03:59:27 -- target/referrals.sh@21 -- # sort 00:08:13.007 03:59:27 -- common/autotest_common.sh@10 -- # set +x 00:08:13.007 03:59:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:13.007 03:59:27 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:13.007 03:59:27 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:13.007 03:59:27 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:13.007 03:59:27 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:13.007 03:59:27 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:13.007 03:59:27 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:13.007 03:59:27 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:13.007 03:59:27 -- target/referrals.sh@26 -- # sort 00:08:13.267 03:59:27 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:13.267 03:59:27 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:13.267 03:59:27 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:08:13.267 03:59:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:13.267 03:59:27 -- common/autotest_common.sh@10 -- # set +x 00:08:13.267 03:59:27 
-- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:13.267 03:59:27 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:08:13.267 03:59:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:13.267 03:59:27 -- common/autotest_common.sh@10 -- # set +x 00:08:13.267 03:59:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:13.267 03:59:27 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:08:13.267 03:59:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:13.267 03:59:27 -- common/autotest_common.sh@10 -- # set +x 00:08:13.267 03:59:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:13.267 03:59:27 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.267 03:59:27 -- target/referrals.sh@56 -- # jq length 00:08:13.267 03:59:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:13.267 03:59:27 -- common/autotest_common.sh@10 -- # set +x 00:08:13.267 03:59:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:13.267 03:59:27 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:13.267 03:59:27 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:13.267 03:59:27 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:13.267 03:59:27 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:13.267 03:59:27 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:13.267 03:59:27 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:13.267 03:59:27 -- target/referrals.sh@26 -- # sort 00:08:13.267 03:59:27 -- target/referrals.sh@26 -- # echo 00:08:13.267 03:59:27 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:13.267 03:59:27 -- target/referrals.sh@60 -- # rpc_cmd 
nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:08:13.267 03:59:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:13.267 03:59:27 -- common/autotest_common.sh@10 -- # set +x 00:08:13.267 03:59:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:13.267 03:59:27 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:13.267 03:59:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:13.267 03:59:27 -- common/autotest_common.sh@10 -- # set +x 00:08:13.267 03:59:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:13.267 03:59:27 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:13.267 03:59:27 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:13.267 03:59:27 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.267 03:59:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:13.267 03:59:27 -- target/referrals.sh@21 -- # sort 00:08:13.267 03:59:27 -- common/autotest_common.sh@10 -- # set +x 00:08:13.267 03:59:27 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:13.267 03:59:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:13.527 03:59:27 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:13.527 03:59:27 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:13.527 03:59:27 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:13.527 03:59:27 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:13.527 03:59:27 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:13.527 03:59:27 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:13.527 03:59:27 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current 
discovery subsystem").traddr' 00:08:13.527 03:59:27 -- target/referrals.sh@26 -- # sort 00:08:13.527 03:59:27 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:13.527 03:59:27 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:13.527 03:59:27 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:13.527 03:59:27 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:13.527 03:59:27 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:13.527 03:59:27 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:13.527 03:59:27 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:13.527 03:59:27 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:13.527 03:59:27 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:13.527 03:59:27 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:13.527 03:59:27 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:13.527 03:59:27 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:13.527 03:59:27 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:13.787 03:59:28 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:13.787 03:59:28 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:13.787 03:59:28 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:08:13.787 03:59:28 -- common/autotest_common.sh@10 -- # set +x 00:08:13.787 03:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:13.787 03:59:28 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:13.787 03:59:28 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:13.787 03:59:28 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.787 03:59:28 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:13.787 03:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:13.787 03:59:28 -- target/referrals.sh@21 -- # sort 00:08:13.787 03:59:28 -- common/autotest_common.sh@10 -- # set +x 00:08:13.787 03:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:13.787 03:59:28 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:13.788 03:59:28 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:13.788 03:59:28 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:13.788 03:59:28 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:13.788 03:59:28 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:13.788 03:59:28 -- target/referrals.sh@26 -- # sort 00:08:13.788 03:59:28 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:13.788 03:59:28 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:13.788 03:59:28 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:13.788 03:59:28 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:13.788 03:59:28 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:13.788 03:59:28 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:13.788 03:59:28 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:13.788 
03:59:28 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:13.788 03:59:28 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:14.047 03:59:28 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:14.047 03:59:28 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:14.047 03:59:28 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:14.047 03:59:28 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:14.047 03:59:28 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:14.047 03:59:28 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:14.047 03:59:28 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:14.047 03:59:28 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:14.047 03:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:14.047 03:59:28 -- common/autotest_common.sh@10 -- # set +x 00:08:14.047 03:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:14.047 03:59:28 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:14.047 03:59:28 -- target/referrals.sh@82 -- # jq length 00:08:14.047 03:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:14.047 03:59:28 -- common/autotest_common.sh@10 -- # set +x 00:08:14.047 03:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:14.047 03:59:28 -- target/referrals.sh@82 -- # (( 0 == 0 
)) 00:08:14.047 03:59:28 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:14.047 03:59:28 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:14.047 03:59:28 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:14.047 03:59:28 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:14.047 03:59:28 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:14.047 03:59:28 -- target/referrals.sh@26 -- # sort 00:08:14.047 03:59:28 -- target/referrals.sh@26 -- # echo 00:08:14.047 03:59:28 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:14.047 03:59:28 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:14.047 03:59:28 -- target/referrals.sh@86 -- # nvmftestfini 00:08:14.047 03:59:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:14.047 03:59:28 -- nvmf/common.sh@117 -- # sync 00:08:14.047 03:59:28 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:14.047 03:59:28 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:14.047 03:59:28 -- nvmf/common.sh@120 -- # set +e 00:08:14.047 03:59:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:14.047 03:59:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:14.047 rmmod nvme_rdma 00:08:14.047 rmmod nvme_fabrics 00:08:14.307 03:59:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:14.307 03:59:28 -- nvmf/common.sh@124 -- # set -e 00:08:14.307 03:59:28 -- nvmf/common.sh@125 -- # return 0 00:08:14.307 03:59:28 -- nvmf/common.sh@478 -- # '[' -n 171646 ']' 00:08:14.307 03:59:28 -- nvmf/common.sh@479 -- # killprocess 171646 00:08:14.307 03:59:28 -- common/autotest_common.sh@936 -- # '[' -z 171646 ']' 00:08:14.307 03:59:28 -- common/autotest_common.sh@940 -- # kill -0 171646 00:08:14.307 03:59:28 -- common/autotest_common.sh@941 -- # uname 00:08:14.307 03:59:28 
-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:14.307 03:59:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 171646 00:08:14.307 03:59:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:14.307 03:59:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:14.307 03:59:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 171646' 00:08:14.307 killing process with pid 171646 00:08:14.307 03:59:28 -- common/autotest_common.sh@955 -- # kill 171646 00:08:14.307 03:59:28 -- common/autotest_common.sh@960 -- # wait 171646 00:08:14.567 03:59:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:14.567 03:59:28 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:08:14.567 00:08:14.567 real 0m8.177s 00:08:14.567 user 0m11.783s 00:08:14.567 sys 0m4.809s 00:08:14.567 03:59:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:14.567 03:59:28 -- common/autotest_common.sh@10 -- # set +x 00:08:14.567 ************************************ 00:08:14.567 END TEST nvmf_referrals 00:08:14.567 ************************************ 00:08:14.567 03:59:28 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:14.567 03:59:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:14.567 03:59:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.567 03:59:28 -- common/autotest_common.sh@10 -- # set +x 00:08:14.567 ************************************ 00:08:14.567 START TEST nvmf_connect_disconnect 00:08:14.567 ************************************ 00:08:14.567 03:59:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:14.828 * Looking for test storage... 
00:08:14.828 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:14.828 03:59:29 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:14.828 03:59:29 -- nvmf/common.sh@7 -- # uname -s 00:08:14.828 03:59:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.828 03:59:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.828 03:59:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.828 03:59:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.828 03:59:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.828 03:59:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.828 03:59:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.828 03:59:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.828 03:59:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.828 03:59:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.828 03:59:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:08:14.828 03:59:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:08:14.828 03:59:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.828 03:59:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.828 03:59:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:14.828 03:59:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.828 03:59:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:14.828 03:59:29 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.828 03:59:29 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.828 03:59:29 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.828 03:59:29 -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.828 03:59:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.828 03:59:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.828 03:59:29 -- paths/export.sh@5 -- # export PATH 00:08:14.828 03:59:29 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.828 03:59:29 -- nvmf/common.sh@47 -- # : 0 00:08:14.828 03:59:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:14.828 03:59:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:14.828 03:59:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.828 03:59:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.828 03:59:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.828 03:59:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:14.828 03:59:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:14.828 03:59:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:14.828 03:59:29 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:14.828 03:59:29 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:14.828 03:59:29 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:14.828 03:59:29 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:08:14.828 03:59:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.828 03:59:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:14.828 03:59:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:14.828 03:59:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:14.828 03:59:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.828 03:59:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.828 03:59:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:08:14.828 03:59:29 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:14.828 03:59:29 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:14.828 03:59:29 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:14.828 03:59:29 -- common/autotest_common.sh@10 -- # set +x 00:08:21.406 03:59:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:21.406 03:59:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:21.406 03:59:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:21.406 03:59:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:21.406 03:59:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:21.406 03:59:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:21.406 03:59:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:21.406 03:59:34 -- nvmf/common.sh@295 -- # net_devs=() 00:08:21.406 03:59:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:21.406 03:59:34 -- nvmf/common.sh@296 -- # e810=() 00:08:21.406 03:59:34 -- nvmf/common.sh@296 -- # local -ga e810 00:08:21.406 03:59:34 -- nvmf/common.sh@297 -- # x722=() 00:08:21.406 03:59:34 -- nvmf/common.sh@297 -- # local -ga x722 00:08:21.406 03:59:34 -- nvmf/common.sh@298 -- # mlx=() 00:08:21.406 03:59:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:21.406 03:59:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.406 03:59:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.406 03:59:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.406 03:59:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.406 03:59:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.406 03:59:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.406 03:59:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.406 03:59:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:08:21.406 03:59:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.406 03:59:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.406 03:59:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.406 03:59:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:21.406 03:59:34 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:21.406 03:59:34 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:21.406 03:59:34 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:21.406 03:59:34 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:21.406 03:59:34 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:21.406 03:59:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:21.406 03:59:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.406 03:59:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:21.406 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:21.406 03:59:34 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:21.406 03:59:34 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:21.406 03:59:34 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:21.406 03:59:34 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:21.406 03:59:34 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:21.406 03:59:34 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:21.406 03:59:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.406 03:59:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:21.406 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:21.406 03:59:34 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:21.406 03:59:34 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:21.406 03:59:34 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:21.406 03:59:34 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:21.406 03:59:34 
-- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:21.406 03:59:34 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:21.406 03:59:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:21.406 03:59:34 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:21.406 03:59:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.406 03:59:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.406 03:59:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:21.406 03:59:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.406 03:59:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:21.406 Found net devices under 0000:18:00.0: mlx_0_0 00:08:21.406 03:59:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.406 03:59:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.406 03:59:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.406 03:59:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:21.406 03:59:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.406 03:59:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:21.406 Found net devices under 0000:18:00.1: mlx_0_1 00:08:21.406 03:59:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.406 03:59:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:21.406 03:59:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:21.406 03:59:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:21.406 03:59:34 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:08:21.406 03:59:34 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:08:21.406 03:59:34 -- nvmf/common.sh@409 -- # rdma_device_init 00:08:21.406 03:59:34 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:08:21.406 03:59:34 -- nvmf/common.sh@58 -- # uname 00:08:21.406 03:59:34 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:21.406 
03:59:34 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:21.406 03:59:34 -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:21.406 03:59:34 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:21.406 03:59:34 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:21.406 03:59:34 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:21.406 03:59:34 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:21.406 03:59:34 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:21.406 03:59:34 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:08:21.406 03:59:34 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:21.406 03:59:34 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:21.406 03:59:34 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:21.406 03:59:34 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:21.406 03:59:34 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:21.406 03:59:34 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:21.406 03:59:34 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:21.406 03:59:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:21.406 03:59:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:21.406 03:59:34 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:21.406 03:59:34 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:21.406 03:59:34 -- nvmf/common.sh@105 -- # continue 2 00:08:21.406 03:59:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:21.407 03:59:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:21.407 03:59:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:21.407 03:59:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:21.407 03:59:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:21.407 03:59:34 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:21.407 03:59:34 -- nvmf/common.sh@105 -- # continue 2 00:08:21.407 
03:59:34 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:21.407 03:59:34 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:21.407 03:59:34 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:21.407 03:59:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:21.407 03:59:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:21.407 03:59:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:21.407 03:59:34 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:21.407 03:59:34 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:21.407 03:59:34 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:21.407 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:21.407 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:08:21.407 altname enp24s0f0np0 00:08:21.407 altname ens785f0np0 00:08:21.407 inet 192.168.100.8/24 scope global mlx_0_0 00:08:21.407 valid_lft forever preferred_lft forever 00:08:21.407 03:59:34 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:21.407 03:59:34 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:21.407 03:59:34 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:21.407 03:59:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:21.407 03:59:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:21.407 03:59:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:21.407 03:59:34 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:21.407 03:59:34 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:21.407 03:59:34 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:21.407 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:21.407 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:08:21.407 altname enp24s0f1np1 00:08:21.407 altname ens785f1np1 00:08:21.407 inet 192.168.100.9/24 scope global mlx_0_1 00:08:21.407 valid_lft forever preferred_lft forever 00:08:21.407 03:59:34 -- nvmf/common.sh@411 -- # return 0 00:08:21.407 03:59:34 -- nvmf/common.sh@439 -- # '[' '' == 
iso ']' 00:08:21.407 03:59:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:21.407 03:59:34 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:08:21.407 03:59:34 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:08:21.407 03:59:34 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:21.407 03:59:34 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:21.407 03:59:34 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:21.407 03:59:34 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:21.407 03:59:34 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:21.407 03:59:34 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:21.407 03:59:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:21.407 03:59:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:21.407 03:59:34 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:21.407 03:59:34 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:21.407 03:59:34 -- nvmf/common.sh@105 -- # continue 2 00:08:21.407 03:59:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:21.407 03:59:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:21.407 03:59:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:21.407 03:59:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:21.407 03:59:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:21.407 03:59:34 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:21.407 03:59:34 -- nvmf/common.sh@105 -- # continue 2 00:08:21.407 03:59:34 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:21.407 03:59:34 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:21.407 03:59:34 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:21.407 03:59:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:21.407 03:59:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:21.407 
03:59:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:21.407 03:59:34 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:21.407 03:59:34 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:21.407 03:59:34 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:21.407 03:59:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:21.407 03:59:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:21.407 03:59:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:21.407 03:59:34 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:08:21.407 192.168.100.9' 00:08:21.407 03:59:34 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:21.407 192.168.100.9' 00:08:21.407 03:59:34 -- nvmf/common.sh@446 -- # head -n 1 00:08:21.407 03:59:34 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:21.407 03:59:34 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:08:21.407 192.168.100.9' 00:08:21.407 03:59:34 -- nvmf/common.sh@447 -- # tail -n +2 00:08:21.407 03:59:34 -- nvmf/common.sh@447 -- # head -n 1 00:08:21.407 03:59:34 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:21.407 03:59:34 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:08:21.407 03:59:34 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:21.407 03:59:34 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:08:21.407 03:59:34 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:08:21.407 03:59:34 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:08:21.407 03:59:34 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:21.407 03:59:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:21.407 03:59:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:21.407 03:59:34 -- common/autotest_common.sh@10 -- # set +x 00:08:21.407 03:59:34 -- nvmf/common.sh@470 -- # nvmfpid=175516 00:08:21.407 03:59:34 -- nvmf/common.sh@471 -- # waitforlisten 175516 00:08:21.407 03:59:34 -- nvmf/common.sh@469 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:21.407 03:59:34 -- common/autotest_common.sh@817 -- # '[' -z 175516 ']' 00:08:21.407 03:59:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.407 03:59:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:21.407 03:59:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.407 03:59:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:21.407 03:59:34 -- common/autotest_common.sh@10 -- # set +x 00:08:21.407 [2024-04-19 03:59:34.920013] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:08:21.407 [2024-04-19 03:59:34.920065] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.407 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.407 [2024-04-19 03:59:34.971741] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:21.407 [2024-04-19 03:59:35.047190] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.407 [2024-04-19 03:59:35.047224] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.407 [2024-04-19 03:59:35.047231] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.407 [2024-04-19 03:59:35.047236] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.407 [2024-04-19 03:59:35.047241] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:21.407 [2024-04-19 03:59:35.047287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.407 [2024-04-19 03:59:35.047368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.407 [2024-04-19 03:59:35.047455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:21.407 [2024-04-19 03:59:35.047456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.407 03:59:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:21.407 03:59:35 -- common/autotest_common.sh@850 -- # return 0 00:08:21.407 03:59:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:21.407 03:59:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:21.407 03:59:35 -- common/autotest_common.sh@10 -- # set +x 00:08:21.407 03:59:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.407 03:59:35 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:21.407 03:59:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:21.407 03:59:35 -- common/autotest_common.sh@10 -- # set +x 00:08:21.407 [2024-04-19 03:59:35.753113] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:21.407 [2024-04-19 03:59:35.772265] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23ba6c0/0x23bebb0) succeed. 00:08:21.407 [2024-04-19 03:59:35.781710] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23bbcb0/0x2400240) succeed. 
00:08:21.407 03:59:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:21.407 03:59:35 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:21.407 03:59:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:21.407 03:59:35 -- common/autotest_common.sh@10 -- # set +x 00:08:21.407 03:59:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:21.407 03:59:35 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:21.407 03:59:35 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:21.407 03:59:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:21.407 03:59:35 -- common/autotest_common.sh@10 -- # set +x 00:08:21.407 03:59:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:21.407 03:59:35 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:21.407 03:59:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:21.407 03:59:35 -- common/autotest_common.sh@10 -- # set +x 00:08:21.408 03:59:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:21.408 03:59:35 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:21.408 03:59:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:21.408 03:59:35 -- common/autotest_common.sh@10 -- # set +x 00:08:21.408 [2024-04-19 03:59:35.911102] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:21.408 03:59:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:21.408 03:59:35 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:21.408 03:59:35 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:21.408 03:59:35 -- target/connect_disconnect.sh@34 -- # set +x 00:08:25.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.804 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.296 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.495 03:59:55 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:41.495 03:59:55 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:41.495 03:59:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:41.495 03:59:55 -- nvmf/common.sh@117 -- # sync 00:08:41.495 03:59:55 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:41.495 03:59:55 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:41.495 03:59:55 -- nvmf/common.sh@120 -- # set +e 00:08:41.495 03:59:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:41.495 03:59:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:41.495 rmmod nvme_rdma 00:08:41.495 rmmod nvme_fabrics 00:08:41.495 03:59:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:41.495 03:59:55 -- nvmf/common.sh@124 -- # set -e 00:08:41.495 03:59:55 -- nvmf/common.sh@125 -- # return 0 00:08:41.495 03:59:55 -- nvmf/common.sh@478 -- # '[' -n 175516 ']' 00:08:41.495 03:59:55 -- nvmf/common.sh@479 -- # killprocess 175516 00:08:41.495 03:59:55 -- common/autotest_common.sh@936 -- # '[' -z 175516 ']' 00:08:41.495 03:59:55 -- common/autotest_common.sh@940 -- # kill -0 175516 00:08:41.495 03:59:55 -- common/autotest_common.sh@941 -- # uname 00:08:41.495 03:59:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:41.495 03:59:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 175516 00:08:41.495 03:59:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:41.495 03:59:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:41.495 03:59:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 175516' 00:08:41.495 killing process with pid 175516 00:08:41.495 
03:59:55 -- common/autotest_common.sh@955 -- # kill 175516 00:08:41.495 03:59:55 -- common/autotest_common.sh@960 -- # wait 175516 00:08:41.756 03:59:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:41.756 03:59:56 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:08:41.756 00:08:41.756 real 0m27.008s 00:08:41.756 user 1m25.246s 00:08:41.756 sys 0m5.144s 00:08:41.756 03:59:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:41.756 03:59:56 -- common/autotest_common.sh@10 -- # set +x 00:08:41.756 ************************************ 00:08:41.756 END TEST nvmf_connect_disconnect 00:08:41.756 ************************************ 00:08:41.756 03:59:56 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:08:41.756 03:59:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:41.756 03:59:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:41.756 03:59:56 -- common/autotest_common.sh@10 -- # set +x 00:08:41.756 ************************************ 00:08:41.756 START TEST nvmf_multitarget 00:08:41.756 ************************************ 00:08:41.756 03:59:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:08:42.017 * Looking for test storage... 
00:08:42.017 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:42.017 03:59:56 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.017 03:59:56 -- nvmf/common.sh@7 -- # uname -s 00:08:42.017 03:59:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.017 03:59:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.017 03:59:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.017 03:59:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.017 03:59:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.017 03:59:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.017 03:59:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.017 03:59:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.017 03:59:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.017 03:59:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.017 03:59:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:08:42.017 03:59:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:08:42.017 03:59:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.017 03:59:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.017 03:59:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.017 03:59:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.017 03:59:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:42.017 03:59:56 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.017 03:59:56 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.017 03:59:56 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.017 03:59:56 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.017 03:59:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.017 03:59:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.017 03:59:56 -- paths/export.sh@5 -- # export PATH 00:08:42.017 03:59:56 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.017 03:59:56 -- nvmf/common.sh@47 -- # : 0 00:08:42.017 03:59:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:42.017 03:59:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:42.017 03:59:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.017 03:59:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.017 03:59:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.017 03:59:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:42.017 03:59:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:42.017 03:59:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:42.017 03:59:56 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:42.017 03:59:56 -- target/multitarget.sh@15 -- # nvmftestinit 00:08:42.017 03:59:56 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:08:42.017 03:59:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.017 03:59:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:42.017 03:59:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:42.017 03:59:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:42.017 03:59:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.017 03:59:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.017 03:59:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.017 03:59:56 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:42.017 03:59:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:42.017 03:59:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:42.017 03:59:56 -- common/autotest_common.sh@10 -- # set +x 00:08:47.302 04:00:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:47.302 04:00:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:47.302 04:00:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:47.302 04:00:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:47.302 04:00:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:47.302 04:00:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:47.302 04:00:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:47.302 04:00:01 -- nvmf/common.sh@295 -- # net_devs=() 00:08:47.302 04:00:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:47.302 04:00:01 -- nvmf/common.sh@296 -- # e810=() 00:08:47.302 04:00:01 -- nvmf/common.sh@296 -- # local -ga e810 00:08:47.302 04:00:01 -- nvmf/common.sh@297 -- # x722=() 00:08:47.302 04:00:01 -- nvmf/common.sh@297 -- # local -ga x722 00:08:47.302 04:00:01 -- nvmf/common.sh@298 -- # mlx=() 00:08:47.302 04:00:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:47.302 04:00:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:47.302 04:00:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:47.302 04:00:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:47.302 04:00:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:47.302 04:00:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:47.302 04:00:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:47.302 04:00:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:47.302 04:00:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:47.302 04:00:01 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:47.302 04:00:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:47.302 04:00:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:47.302 04:00:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:47.302 04:00:01 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:47.302 04:00:01 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:47.302 04:00:01 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:47.302 04:00:01 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:47.302 04:00:01 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:47.302 04:00:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:47.302 04:00:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:47.302 04:00:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:47.302 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:47.302 04:00:01 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:47.302 04:00:01 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:47.302 04:00:01 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:47.302 04:00:01 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:47.302 04:00:01 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:47.302 04:00:01 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:47.302 04:00:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:47.302 04:00:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:47.302 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:47.302 04:00:01 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:47.302 04:00:01 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:47.302 04:00:01 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:47.302 04:00:01 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:47.302 04:00:01 -- nvmf/common.sh@352 -- 
# [[ rdma == rdma ]] 00:08:47.302 04:00:01 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:47.302 04:00:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:47.302 04:00:01 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:47.302 04:00:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:47.302 04:00:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.302 04:00:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:47.302 04:00:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.302 04:00:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:47.302 Found net devices under 0000:18:00.0: mlx_0_0 00:08:47.302 04:00:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.302 04:00:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:47.302 04:00:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.302 04:00:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:47.302 04:00:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.302 04:00:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:47.302 Found net devices under 0000:18:00.1: mlx_0_1 00:08:47.302 04:00:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.302 04:00:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:47.302 04:00:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:47.302 04:00:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:47.302 04:00:01 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:08:47.302 04:00:01 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:08:47.302 04:00:01 -- nvmf/common.sh@409 -- # rdma_device_init 00:08:47.302 04:00:01 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:08:47.302 04:00:01 -- nvmf/common.sh@58 -- # uname 00:08:47.302 04:00:01 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:47.302 04:00:01 -- nvmf/common.sh@62 
-- # modprobe ib_cm 00:08:47.302 04:00:01 -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:47.302 04:00:01 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:47.302 04:00:01 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:47.302 04:00:01 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:47.302 04:00:01 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:47.302 04:00:01 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:47.302 04:00:01 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:08:47.302 04:00:01 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:47.302 04:00:01 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:47.302 04:00:01 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:47.302 04:00:01 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:47.302 04:00:01 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:47.302 04:00:01 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:47.302 04:00:01 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:47.302 04:00:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:47.302 04:00:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.302 04:00:01 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:47.302 04:00:01 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:47.302 04:00:01 -- nvmf/common.sh@105 -- # continue 2 00:08:47.302 04:00:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:47.302 04:00:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.302 04:00:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:47.302 04:00:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.302 04:00:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:47.302 04:00:01 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:47.302 04:00:01 -- nvmf/common.sh@105 -- # continue 2 00:08:47.302 04:00:01 -- nvmf/common.sh@73 -- # 
for nic_name in $(get_rdma_if_list) 00:08:47.302 04:00:01 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:47.302 04:00:01 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:47.302 04:00:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:47.302 04:00:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:47.302 04:00:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:47.302 04:00:01 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:47.302 04:00:01 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:47.302 04:00:01 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:47.302 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:47.302 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:08:47.302 altname enp24s0f0np0 00:08:47.302 altname ens785f0np0 00:08:47.302 inet 192.168.100.8/24 scope global mlx_0_0 00:08:47.302 valid_lft forever preferred_lft forever 00:08:47.302 04:00:01 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:47.302 04:00:01 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:47.302 04:00:01 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:47.302 04:00:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:47.302 04:00:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:47.302 04:00:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:47.302 04:00:01 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:47.302 04:00:01 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:47.302 04:00:01 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:47.302 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:47.302 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:08:47.302 altname enp24s0f1np1 00:08:47.302 altname ens785f1np1 00:08:47.302 inet 192.168.100.9/24 scope global mlx_0_1 00:08:47.302 valid_lft forever preferred_lft forever 00:08:47.302 04:00:01 -- nvmf/common.sh@411 -- # return 0 00:08:47.302 04:00:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:47.302 04:00:01 -- 
nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:47.302 04:00:01 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:08:47.302 04:00:01 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:08:47.302 04:00:01 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:47.302 04:00:01 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:47.302 04:00:01 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:47.302 04:00:01 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:47.302 04:00:01 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:47.302 04:00:01 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:47.302 04:00:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:47.303 04:00:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.303 04:00:01 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:47.303 04:00:01 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:47.303 04:00:01 -- nvmf/common.sh@105 -- # continue 2 00:08:47.303 04:00:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:47.303 04:00:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.303 04:00:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:47.303 04:00:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.303 04:00:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:47.303 04:00:01 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:47.303 04:00:01 -- nvmf/common.sh@105 -- # continue 2 00:08:47.303 04:00:01 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:47.303 04:00:01 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:47.303 04:00:01 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:47.303 04:00:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:47.303 04:00:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:47.303 04:00:01 -- nvmf/common.sh@113 -- # 
awk '{print $4}' 00:08:47.303 04:00:01 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:47.303 04:00:01 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:47.303 04:00:01 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:47.303 04:00:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:47.303 04:00:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:47.303 04:00:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:47.303 04:00:01 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:08:47.303 192.168.100.9' 00:08:47.303 04:00:01 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:47.303 192.168.100.9' 00:08:47.303 04:00:01 -- nvmf/common.sh@446 -- # head -n 1 00:08:47.303 04:00:01 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:47.303 04:00:01 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:08:47.303 192.168.100.9' 00:08:47.303 04:00:01 -- nvmf/common.sh@447 -- # head -n 1 00:08:47.303 04:00:01 -- nvmf/common.sh@447 -- # tail -n +2 00:08:47.303 04:00:01 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:47.303 04:00:01 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:08:47.303 04:00:01 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:47.303 04:00:01 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:08:47.303 04:00:01 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:08:47.303 04:00:01 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:08:47.303 04:00:01 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:47.303 04:00:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:47.303 04:00:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:47.303 04:00:01 -- common/autotest_common.sh@10 -- # set +x 00:08:47.303 04:00:01 -- nvmf/common.sh@470 -- # nvmfpid=182814 00:08:47.303 04:00:01 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:47.303 04:00:01 -- 
nvmf/common.sh@471 -- # waitforlisten 182814 00:08:47.303 04:00:01 -- common/autotest_common.sh@817 -- # '[' -z 182814 ']' 00:08:47.303 04:00:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.303 04:00:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:47.303 04:00:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.303 04:00:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:47.303 04:00:01 -- common/autotest_common.sh@10 -- # set +x 00:08:47.303 [2024-04-19 04:00:01.696708] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:08:47.303 [2024-04-19 04:00:01.696755] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.303 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.303 [2024-04-19 04:00:01.746458] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:47.303 [2024-04-19 04:00:01.819618] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.303 [2024-04-19 04:00:01.819653] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:47.303 [2024-04-19 04:00:01.819660] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:47.303 [2024-04-19 04:00:01.819665] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:47.303 [2024-04-19 04:00:01.819670] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:47.303 [2024-04-19 04:00:01.819709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.303 [2024-04-19 04:00:01.819802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:47.303 [2024-04-19 04:00:01.819900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:47.303 [2024-04-19 04:00:01.819902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.239 04:00:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:48.239 04:00:02 -- common/autotest_common.sh@850 -- # return 0 00:08:48.239 04:00:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:48.239 04:00:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:48.239 04:00:02 -- common/autotest_common.sh@10 -- # set +x 00:08:48.239 04:00:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:48.239 04:00:02 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:48.239 04:00:02 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:48.239 04:00:02 -- target/multitarget.sh@21 -- # jq length 00:08:48.239 04:00:02 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:48.239 04:00:02 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:48.239 "nvmf_tgt_1" 00:08:48.239 04:00:02 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:48.496 "nvmf_tgt_2" 00:08:48.496 04:00:02 -- target/multitarget.sh@28 -- # jq length 00:08:48.496 04:00:02 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:48.496 04:00:02 -- 
target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:48.496 04:00:02 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:48.496 true 00:08:48.496 04:00:02 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:48.761 true 00:08:48.761 04:00:03 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:48.761 04:00:03 -- target/multitarget.sh@35 -- # jq length 00:08:48.761 04:00:03 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:48.761 04:00:03 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:48.761 04:00:03 -- target/multitarget.sh@41 -- # nvmftestfini 00:08:48.761 04:00:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:48.761 04:00:03 -- nvmf/common.sh@117 -- # sync 00:08:48.761 04:00:03 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:48.761 04:00:03 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:48.761 04:00:03 -- nvmf/common.sh@120 -- # set +e 00:08:48.761 04:00:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:48.761 04:00:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:48.761 rmmod nvme_rdma 00:08:48.761 rmmod nvme_fabrics 00:08:48.761 04:00:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:48.761 04:00:03 -- nvmf/common.sh@124 -- # set -e 00:08:48.761 04:00:03 -- nvmf/common.sh@125 -- # return 0 00:08:48.761 04:00:03 -- nvmf/common.sh@478 -- # '[' -n 182814 ']' 00:08:48.761 04:00:03 -- nvmf/common.sh@479 -- # killprocess 182814 00:08:48.761 04:00:03 -- common/autotest_common.sh@936 -- # '[' -z 182814 ']' 00:08:48.761 04:00:03 -- common/autotest_common.sh@940 -- # kill -0 182814 00:08:48.761 04:00:03 -- common/autotest_common.sh@941 -- # uname 00:08:48.761 04:00:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux 
']' 00:08:48.761 04:00:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 182814 00:08:48.761 04:00:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:48.761 04:00:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:48.761 04:00:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 182814' 00:08:48.761 killing process with pid 182814 00:08:48.761 04:00:03 -- common/autotest_common.sh@955 -- # kill 182814 00:08:48.761 04:00:03 -- common/autotest_common.sh@960 -- # wait 182814 00:08:49.029 04:00:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:49.029 04:00:03 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:08:49.030 00:08:49.030 real 0m7.201s 00:08:49.030 user 0m8.783s 00:08:49.030 sys 0m4.356s 00:08:49.030 04:00:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:49.030 04:00:03 -- common/autotest_common.sh@10 -- # set +x 00:08:49.030 ************************************ 00:08:49.030 END TEST nvmf_multitarget 00:08:49.030 ************************************ 00:08:49.030 04:00:03 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:08:49.030 04:00:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:49.030 04:00:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:49.030 04:00:03 -- common/autotest_common.sh@10 -- # set +x 00:08:49.290 ************************************ 00:08:49.290 START TEST nvmf_rpc 00:08:49.290 ************************************ 00:08:49.290 04:00:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:08:49.290 * Looking for test storage... 
00:08:49.290 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:49.290 04:00:03 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.290 04:00:03 -- nvmf/common.sh@7 -- # uname -s 00:08:49.290 04:00:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.290 04:00:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.290 04:00:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.290 04:00:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.290 04:00:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.290 04:00:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.290 04:00:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.290 04:00:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.290 04:00:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.290 04:00:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.290 04:00:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:08:49.290 04:00:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:08:49.290 04:00:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.290 04:00:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.290 04:00:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.290 04:00:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.290 04:00:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:49.290 04:00:03 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.290 04:00:03 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.290 04:00:03 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.290 04:00:03 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.290 04:00:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.290 04:00:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.290 04:00:03 -- paths/export.sh@5 -- # export PATH 00:08:49.290 04:00:03 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.290 04:00:03 -- nvmf/common.sh@47 -- # : 0 00:08:49.290 04:00:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:49.290 04:00:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:49.290 04:00:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.290 04:00:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.290 04:00:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.290 04:00:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:49.290 04:00:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:49.290 04:00:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:49.290 04:00:03 -- target/rpc.sh@11 -- # loops=5 00:08:49.290 04:00:03 -- target/rpc.sh@23 -- # nvmftestinit 00:08:49.290 04:00:03 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:08:49.290 04:00:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.290 04:00:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:49.290 04:00:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:49.290 04:00:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:49.290 04:00:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.290 04:00:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:49.290 04:00:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.290 04:00:03 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:49.290 04:00:03 -- nvmf/common.sh@403 -- # 
gather_supported_nvmf_pci_devs 00:08:49.290 04:00:03 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:49.290 04:00:03 -- common/autotest_common.sh@10 -- # set +x 00:08:54.604 04:00:09 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:54.604 04:00:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:54.604 04:00:09 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:54.604 04:00:09 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:54.604 04:00:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:54.604 04:00:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:54.604 04:00:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:54.604 04:00:09 -- nvmf/common.sh@295 -- # net_devs=() 00:08:54.604 04:00:09 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:54.604 04:00:09 -- nvmf/common.sh@296 -- # e810=() 00:08:54.604 04:00:09 -- nvmf/common.sh@296 -- # local -ga e810 00:08:54.604 04:00:09 -- nvmf/common.sh@297 -- # x722=() 00:08:54.604 04:00:09 -- nvmf/common.sh@297 -- # local -ga x722 00:08:54.604 04:00:09 -- nvmf/common.sh@298 -- # mlx=() 00:08:54.604 04:00:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:54.604 04:00:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.604 04:00:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.604 04:00:09 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.604 04:00:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.604 04:00:09 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.604 04:00:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.604 04:00:09 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.604 04:00:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.604 04:00:09 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.604 04:00:09 -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.604 04:00:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.604 04:00:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:54.604 04:00:09 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:54.604 04:00:09 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:54.604 04:00:09 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:54.604 04:00:09 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:54.604 04:00:09 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:54.604 04:00:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:54.604 04:00:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.604 04:00:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:54.604 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:54.604 04:00:09 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:54.604 04:00:09 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:54.604 04:00:09 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:54.604 04:00:09 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:54.604 04:00:09 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:54.604 04:00:09 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:54.604 04:00:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.604 04:00:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:54.604 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:54.604 04:00:09 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:54.604 04:00:09 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:54.604 04:00:09 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:54.604 04:00:09 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:54.604 04:00:09 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:54.604 04:00:09 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme 
connect -i 15' 00:08:54.604 04:00:09 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:54.604 04:00:09 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:54.604 04:00:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.604 04:00:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.604 04:00:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:54.604 04:00:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.604 04:00:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:54.604 Found net devices under 0000:18:00.0: mlx_0_0 00:08:54.604 04:00:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.604 04:00:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.604 04:00:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.604 04:00:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:54.605 04:00:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.605 04:00:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:54.605 Found net devices under 0000:18:00.1: mlx_0_1 00:08:54.605 04:00:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.605 04:00:09 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:54.605 04:00:09 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:54.605 04:00:09 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:54.605 04:00:09 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:08:54.605 04:00:09 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:08:54.605 04:00:09 -- nvmf/common.sh@409 -- # rdma_device_init 00:08:54.605 04:00:09 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:08:54.605 04:00:09 -- nvmf/common.sh@58 -- # uname 00:08:54.605 04:00:09 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:54.605 04:00:09 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:54.605 04:00:09 -- nvmf/common.sh@63 -- # modprobe ib_core 
00:08:54.605 04:00:09 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:54.605 04:00:09 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:54.605 04:00:09 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:54.605 04:00:09 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:54.605 04:00:09 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:54.605 04:00:09 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:08:54.605 04:00:09 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:54.605 04:00:09 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:54.605 04:00:09 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:54.605 04:00:09 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:54.605 04:00:09 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:54.605 04:00:09 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:54.605 04:00:09 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:54.605 04:00:09 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:54.605 04:00:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.605 04:00:09 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:54.605 04:00:09 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:54.605 04:00:09 -- nvmf/common.sh@105 -- # continue 2 00:08:54.605 04:00:09 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:54.605 04:00:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.605 04:00:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:54.605 04:00:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.605 04:00:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:54.605 04:00:09 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:54.605 04:00:09 -- nvmf/common.sh@105 -- # continue 2 00:08:54.605 04:00:09 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:54.605 04:00:09 -- nvmf/common.sh@74 -- # 
get_ip_address mlx_0_0 00:08:54.605 04:00:09 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:54.605 04:00:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:54.605 04:00:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:54.605 04:00:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:54.605 04:00:09 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:54.605 04:00:09 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:54.605 04:00:09 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:54.605 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:54.605 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:08:54.605 altname enp24s0f0np0 00:08:54.605 altname ens785f0np0 00:08:54.605 inet 192.168.100.8/24 scope global mlx_0_0 00:08:54.605 valid_lft forever preferred_lft forever 00:08:54.605 04:00:09 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:54.605 04:00:09 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:54.605 04:00:09 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:54.605 04:00:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:54.605 04:00:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:54.605 04:00:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:54.605 04:00:09 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:54.605 04:00:09 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:54.605 04:00:09 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:54.865 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:54.865 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:08:54.865 altname enp24s0f1np1 00:08:54.865 altname ens785f1np1 00:08:54.865 inet 192.168.100.9/24 scope global mlx_0_1 00:08:54.865 valid_lft forever preferred_lft forever 00:08:54.865 04:00:09 -- nvmf/common.sh@411 -- # return 0 00:08:54.865 04:00:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:54.865 04:00:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:54.865 04:00:09 -- 
nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:08:54.865 04:00:09 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:08:54.865 04:00:09 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:54.865 04:00:09 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:54.865 04:00:09 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:54.865 04:00:09 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:54.865 04:00:09 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:54.865 04:00:09 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:54.865 04:00:09 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:54.865 04:00:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.865 04:00:09 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:54.865 04:00:09 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:54.865 04:00:09 -- nvmf/common.sh@105 -- # continue 2 00:08:54.865 04:00:09 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:54.865 04:00:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.865 04:00:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:54.865 04:00:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:54.865 04:00:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:54.865 04:00:09 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:54.865 04:00:09 -- nvmf/common.sh@105 -- # continue 2 00:08:54.865 04:00:09 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:54.865 04:00:09 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:54.866 04:00:09 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:54.866 04:00:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:54.866 04:00:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:54.866 04:00:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:54.866 04:00:09 -- nvmf/common.sh@86 -- # for nic_name in 
$(get_rdma_if_list) 00:08:54.866 04:00:09 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:54.866 04:00:09 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:54.866 04:00:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:54.866 04:00:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:54.866 04:00:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:54.866 04:00:09 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:08:54.866 192.168.100.9' 00:08:54.866 04:00:09 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:54.866 192.168.100.9' 00:08:54.866 04:00:09 -- nvmf/common.sh@446 -- # head -n 1 00:08:54.866 04:00:09 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:54.866 04:00:09 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:08:54.866 192.168.100.9' 00:08:54.866 04:00:09 -- nvmf/common.sh@447 -- # tail -n +2 00:08:54.866 04:00:09 -- nvmf/common.sh@447 -- # head -n 1 00:08:54.866 04:00:09 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:54.866 04:00:09 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:08:54.866 04:00:09 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:54.866 04:00:09 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:08:54.866 04:00:09 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:08:54.866 04:00:09 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:08:54.866 04:00:09 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:54.866 04:00:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:54.866 04:00:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:54.866 04:00:09 -- common/autotest_common.sh@10 -- # set +x 00:08:54.866 04:00:09 -- nvmf/common.sh@470 -- # nvmfpid=186990 00:08:54.866 04:00:09 -- nvmf/common.sh@471 -- # waitforlisten 186990 00:08:54.866 04:00:09 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:54.866 04:00:09 -- 
common/autotest_common.sh@817 -- # '[' -z 186990 ']' 00:08:54.866 04:00:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.866 04:00:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:54.866 04:00:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.866 04:00:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:54.866 04:00:09 -- common/autotest_common.sh@10 -- # set +x 00:08:54.866 [2024-04-19 04:00:09.279333] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:08:54.866 [2024-04-19 04:00:09.279379] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.866 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.866 [2024-04-19 04:00:09.331971] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:55.126 [2024-04-19 04:00:09.402517] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:55.126 [2024-04-19 04:00:09.402555] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:55.126 [2024-04-19 04:00:09.402561] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:55.126 [2024-04-19 04:00:09.402566] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:55.126 [2024-04-19 04:00:09.402571] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:55.126 [2024-04-19 04:00:09.402629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.126 [2024-04-19 04:00:09.402720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:55.126 [2024-04-19 04:00:09.402784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:55.126 [2024-04-19 04:00:09.402785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.695 04:00:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:55.695 04:00:10 -- common/autotest_common.sh@850 -- # return 0 00:08:55.695 04:00:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:55.695 04:00:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:55.695 04:00:10 -- common/autotest_common.sh@10 -- # set +x 00:08:55.695 04:00:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.695 04:00:10 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:55.695 04:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.695 04:00:10 -- common/autotest_common.sh@10 -- # set +x 00:08:55.695 04:00:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.695 04:00:10 -- target/rpc.sh@26 -- # stats='{ 00:08:55.695 "tick_rate": 2700000000, 00:08:55.695 "poll_groups": [ 00:08:55.695 { 00:08:55.695 "name": "nvmf_tgt_poll_group_0", 00:08:55.695 "admin_qpairs": 0, 00:08:55.695 "io_qpairs": 0, 00:08:55.695 "current_admin_qpairs": 0, 00:08:55.695 "current_io_qpairs": 0, 00:08:55.695 "pending_bdev_io": 0, 00:08:55.695 "completed_nvme_io": 0, 00:08:55.695 "transports": [] 00:08:55.695 }, 00:08:55.695 { 00:08:55.695 "name": "nvmf_tgt_poll_group_1", 00:08:55.695 "admin_qpairs": 0, 00:08:55.695 "io_qpairs": 0, 00:08:55.695 "current_admin_qpairs": 0, 00:08:55.695 "current_io_qpairs": 0, 00:08:55.695 "pending_bdev_io": 0, 00:08:55.695 "completed_nvme_io": 0, 00:08:55.695 "transports": [] 00:08:55.695 }, 00:08:55.695 { 00:08:55.695 "name": 
"nvmf_tgt_poll_group_2", 00:08:55.695 "admin_qpairs": 0, 00:08:55.695 "io_qpairs": 0, 00:08:55.695 "current_admin_qpairs": 0, 00:08:55.695 "current_io_qpairs": 0, 00:08:55.695 "pending_bdev_io": 0, 00:08:55.695 "completed_nvme_io": 0, 00:08:55.695 "transports": [] 00:08:55.695 }, 00:08:55.695 { 00:08:55.695 "name": "nvmf_tgt_poll_group_3", 00:08:55.695 "admin_qpairs": 0, 00:08:55.695 "io_qpairs": 0, 00:08:55.695 "current_admin_qpairs": 0, 00:08:55.695 "current_io_qpairs": 0, 00:08:55.695 "pending_bdev_io": 0, 00:08:55.695 "completed_nvme_io": 0, 00:08:55.695 "transports": [] 00:08:55.695 } 00:08:55.695 ] 00:08:55.695 }' 00:08:55.695 04:00:10 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:55.695 04:00:10 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:55.695 04:00:10 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:55.695 04:00:10 -- target/rpc.sh@15 -- # wc -l 00:08:55.695 04:00:10 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:55.695 04:00:10 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:55.695 04:00:10 -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:55.695 04:00:10 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:55.695 04:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.695 04:00:10 -- common/autotest_common.sh@10 -- # set +x 00:08:55.955 [2024-04-19 04:00:10.224256] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ebc6f0/0x1ec0be0) succeed. 00:08:55.955 [2024-04-19 04:00:10.233491] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ebdce0/0x1f02270) succeed. 
00:08:55.955 04:00:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.955 04:00:10 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:55.955 04:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.955 04:00:10 -- common/autotest_common.sh@10 -- # set +x 00:08:55.955 04:00:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.955 04:00:10 -- target/rpc.sh@33 -- # stats='{ 00:08:55.955 "tick_rate": 2700000000, 00:08:55.955 "poll_groups": [ 00:08:55.955 { 00:08:55.955 "name": "nvmf_tgt_poll_group_0", 00:08:55.955 "admin_qpairs": 0, 00:08:55.955 "io_qpairs": 0, 00:08:55.955 "current_admin_qpairs": 0, 00:08:55.955 "current_io_qpairs": 0, 00:08:55.955 "pending_bdev_io": 0, 00:08:55.955 "completed_nvme_io": 0, 00:08:55.955 "transports": [ 00:08:55.955 { 00:08:55.955 "trtype": "RDMA", 00:08:55.955 "pending_data_buffer": 0, 00:08:55.955 "devices": [ 00:08:55.955 { 00:08:55.955 "name": "mlx5_0", 00:08:55.955 "polls": 15047, 00:08:55.955 "idle_polls": 15047, 00:08:55.955 "completions": 0, 00:08:55.955 "requests": 0, 00:08:55.955 "request_latency": 0, 00:08:55.955 "pending_free_request": 0, 00:08:55.955 "pending_rdma_read": 0, 00:08:55.955 "pending_rdma_write": 0, 00:08:55.955 "pending_rdma_send": 0, 00:08:55.955 "total_send_wrs": 0, 00:08:55.955 "send_doorbell_updates": 0, 00:08:55.955 "total_recv_wrs": 4096, 00:08:55.955 "recv_doorbell_updates": 1 00:08:55.955 }, 00:08:55.955 { 00:08:55.955 "name": "mlx5_1", 00:08:55.955 "polls": 15047, 00:08:55.955 "idle_polls": 15047, 00:08:55.955 "completions": 0, 00:08:55.955 "requests": 0, 00:08:55.955 "request_latency": 0, 00:08:55.955 "pending_free_request": 0, 00:08:55.955 "pending_rdma_read": 0, 00:08:55.955 "pending_rdma_write": 0, 00:08:55.955 "pending_rdma_send": 0, 00:08:55.955 "total_send_wrs": 0, 00:08:55.955 "send_doorbell_updates": 0, 00:08:55.955 "total_recv_wrs": 4096, 00:08:55.955 "recv_doorbell_updates": 1 00:08:55.955 } 00:08:55.955 ] 00:08:55.955 } 00:08:55.955 ] 00:08:55.955 }, 
00:08:55.955 { 00:08:55.955 "name": "nvmf_tgt_poll_group_1", 00:08:55.955 "admin_qpairs": 0, 00:08:55.955 "io_qpairs": 0, 00:08:55.955 "current_admin_qpairs": 0, 00:08:55.955 "current_io_qpairs": 0, 00:08:55.955 "pending_bdev_io": 0, 00:08:55.955 "completed_nvme_io": 0, 00:08:55.955 "transports": [ 00:08:55.955 { 00:08:55.955 "trtype": "RDMA", 00:08:55.955 "pending_data_buffer": 0, 00:08:55.955 "devices": [ 00:08:55.955 { 00:08:55.955 "name": "mlx5_0", 00:08:55.955 "polls": 9566, 00:08:55.955 "idle_polls": 9566, 00:08:55.955 "completions": 0, 00:08:55.955 "requests": 0, 00:08:55.955 "request_latency": 0, 00:08:55.955 "pending_free_request": 0, 00:08:55.955 "pending_rdma_read": 0, 00:08:55.955 "pending_rdma_write": 0, 00:08:55.955 "pending_rdma_send": 0, 00:08:55.955 "total_send_wrs": 0, 00:08:55.955 "send_doorbell_updates": 0, 00:08:55.955 "total_recv_wrs": 4096, 00:08:55.955 "recv_doorbell_updates": 1 00:08:55.955 }, 00:08:55.955 { 00:08:55.955 "name": "mlx5_1", 00:08:55.955 "polls": 9566, 00:08:55.955 "idle_polls": 9566, 00:08:55.955 "completions": 0, 00:08:55.955 "requests": 0, 00:08:55.955 "request_latency": 0, 00:08:55.955 "pending_free_request": 0, 00:08:55.955 "pending_rdma_read": 0, 00:08:55.955 "pending_rdma_write": 0, 00:08:55.955 "pending_rdma_send": 0, 00:08:55.955 "total_send_wrs": 0, 00:08:55.955 "send_doorbell_updates": 0, 00:08:55.955 "total_recv_wrs": 4096, 00:08:55.955 "recv_doorbell_updates": 1 00:08:55.955 } 00:08:55.955 ] 00:08:55.955 } 00:08:55.955 ] 00:08:55.955 }, 00:08:55.955 { 00:08:55.955 "name": "nvmf_tgt_poll_group_2", 00:08:55.955 "admin_qpairs": 0, 00:08:55.955 "io_qpairs": 0, 00:08:55.955 "current_admin_qpairs": 0, 00:08:55.955 "current_io_qpairs": 0, 00:08:55.955 "pending_bdev_io": 0, 00:08:55.955 "completed_nvme_io": 0, 00:08:55.955 "transports": [ 00:08:55.955 { 00:08:55.955 "trtype": "RDMA", 00:08:55.955 "pending_data_buffer": 0, 00:08:55.955 "devices": [ 00:08:55.955 { 00:08:55.955 "name": "mlx5_0", 00:08:55.955 "polls": 5414, 
00:08:55.955 "idle_polls": 5414, 00:08:55.955 "completions": 0, 00:08:55.955 "requests": 0, 00:08:55.955 "request_latency": 0, 00:08:55.955 "pending_free_request": 0, 00:08:55.955 "pending_rdma_read": 0, 00:08:55.955 "pending_rdma_write": 0, 00:08:55.955 "pending_rdma_send": 0, 00:08:55.955 "total_send_wrs": 0, 00:08:55.955 "send_doorbell_updates": 0, 00:08:55.955 "total_recv_wrs": 4096, 00:08:55.955 "recv_doorbell_updates": 1 00:08:55.955 }, 00:08:55.955 { 00:08:55.955 "name": "mlx5_1", 00:08:55.955 "polls": 5414, 00:08:55.955 "idle_polls": 5414, 00:08:55.955 "completions": 0, 00:08:55.955 "requests": 0, 00:08:55.955 "request_latency": 0, 00:08:55.955 "pending_free_request": 0, 00:08:55.955 "pending_rdma_read": 0, 00:08:55.955 "pending_rdma_write": 0, 00:08:55.955 "pending_rdma_send": 0, 00:08:55.956 "total_send_wrs": 0, 00:08:55.956 "send_doorbell_updates": 0, 00:08:55.956 "total_recv_wrs": 4096, 00:08:55.956 "recv_doorbell_updates": 1 00:08:55.956 } 00:08:55.956 ] 00:08:55.956 } 00:08:55.956 ] 00:08:55.956 }, 00:08:55.956 { 00:08:55.956 "name": "nvmf_tgt_poll_group_3", 00:08:55.956 "admin_qpairs": 0, 00:08:55.956 "io_qpairs": 0, 00:08:55.956 "current_admin_qpairs": 0, 00:08:55.956 "current_io_qpairs": 0, 00:08:55.956 "pending_bdev_io": 0, 00:08:55.956 "completed_nvme_io": 0, 00:08:55.956 "transports": [ 00:08:55.956 { 00:08:55.956 "trtype": "RDMA", 00:08:55.956 "pending_data_buffer": 0, 00:08:55.956 "devices": [ 00:08:55.956 { 00:08:55.956 "name": "mlx5_0", 00:08:55.956 "polls": 934, 00:08:55.956 "idle_polls": 934, 00:08:55.956 "completions": 0, 00:08:55.956 "requests": 0, 00:08:55.956 "request_latency": 0, 00:08:55.956 "pending_free_request": 0, 00:08:55.956 "pending_rdma_read": 0, 00:08:55.956 "pending_rdma_write": 0, 00:08:55.956 "pending_rdma_send": 0, 00:08:55.956 "total_send_wrs": 0, 00:08:55.956 "send_doorbell_updates": 0, 00:08:55.956 "total_recv_wrs": 4096, 00:08:55.956 "recv_doorbell_updates": 1 00:08:55.956 }, 00:08:55.956 { 00:08:55.956 "name": 
"mlx5_1", 00:08:55.956 "polls": 934, 00:08:55.956 "idle_polls": 934, 00:08:55.956 "completions": 0, 00:08:55.956 "requests": 0, 00:08:55.956 "request_latency": 0, 00:08:55.956 "pending_free_request": 0, 00:08:55.956 "pending_rdma_read": 0, 00:08:55.956 "pending_rdma_write": 0, 00:08:55.956 "pending_rdma_send": 0, 00:08:55.956 "total_send_wrs": 0, 00:08:55.956 "send_doorbell_updates": 0, 00:08:55.956 "total_recv_wrs": 4096, 00:08:55.956 "recv_doorbell_updates": 1 00:08:55.956 } 00:08:55.956 ] 00:08:55.956 } 00:08:55.956 ] 00:08:55.956 } 00:08:55.956 ] 00:08:55.956 }' 00:08:55.956 04:00:10 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:55.956 04:00:10 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:55.956 04:00:10 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:55.956 04:00:10 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:55.956 04:00:10 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:55.956 04:00:10 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:55.956 04:00:10 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:55.956 04:00:10 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:55.956 04:00:10 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:55.956 04:00:10 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:55.956 04:00:10 -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:08:55.956 04:00:10 -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:08:55.956 04:00:10 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:08:55.956 04:00:10 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:08:55.956 04:00:10 -- target/rpc.sh@15 -- # wc -l 00:08:56.216 04:00:10 -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:08:56.216 04:00:10 -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:08:56.216 04:00:10 -- target/rpc.sh@41 -- # transport_type=RDMA 00:08:56.216 04:00:10 -- target/rpc.sh@42 -- # 
[[ rdma == \r\d\m\a ]] 00:08:56.216 04:00:10 -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:08:56.216 04:00:10 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:08:56.216 04:00:10 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:08:56.216 04:00:10 -- target/rpc.sh@15 -- # wc -l 00:08:56.216 04:00:10 -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:08:56.216 04:00:10 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:56.216 04:00:10 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:56.216 04:00:10 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:56.216 04:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:56.216 04:00:10 -- common/autotest_common.sh@10 -- # set +x 00:08:56.216 Malloc1 00:08:56.216 04:00:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:56.216 04:00:10 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:56.216 04:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:56.216 04:00:10 -- common/autotest_common.sh@10 -- # set +x 00:08:56.216 04:00:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:56.216 04:00:10 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:56.216 04:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:56.216 04:00:10 -- common/autotest_common.sh@10 -- # set +x 00:08:56.216 04:00:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:56.216 04:00:10 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:56.216 04:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:56.216 04:00:10 -- common/autotest_common.sh@10 -- # set +x 00:08:56.216 04:00:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:56.216 04:00:10 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:56.216 04:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:56.216 04:00:10 -- common/autotest_common.sh@10 -- # set +x 00:08:56.216 [2024-04-19 04:00:10.649273] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:56.216 04:00:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:56.216 04:00:10 -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:08:56.216 04:00:10 -- common/autotest_common.sh@638 -- # local es=0 00:08:56.216 04:00:10 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:08:56.216 04:00:10 -- common/autotest_common.sh@626 -- # local arg=nvme 00:08:56.216 04:00:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:56.216 04:00:10 -- common/autotest_common.sh@630 -- # type -t nvme 00:08:56.216 04:00:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:56.216 04:00:10 -- common/autotest_common.sh@632 -- # type -P nvme 00:08:56.216 04:00:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:56.216 04:00:10 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:08:56.216 04:00:10 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:08:56.216 04:00:10 -- common/autotest_common.sh@641 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 
--hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:08:56.216 [2024-04-19 04:00:10.689036] ctrlr.c: 778:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562' 00:08:56.216 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:56.216 could not add new controller: failed to write to nvme-fabrics device 00:08:56.216 04:00:10 -- common/autotest_common.sh@641 -- # es=1 00:08:56.216 04:00:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:56.216 04:00:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:56.216 04:00:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:56.216 04:00:10 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:08:56.216 04:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:56.216 04:00:10 -- common/autotest_common.sh@10 -- # set +x 00:08:56.216 04:00:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:56.216 04:00:10 -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:57.599 04:00:11 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:57.599 04:00:11 -- common/autotest_common.sh@1184 -- # local i=0 00:08:57.599 04:00:11 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:57.599 04:00:11 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:57.599 04:00:11 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:59.506 04:00:13 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:59.506 04:00:13 -- 
common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:59.506 04:00:13 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:59.506 04:00:13 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:59.506 04:00:13 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:59.506 04:00:13 -- common/autotest_common.sh@1194 -- # return 0 00:08:59.506 04:00:13 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:00.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.446 04:00:14 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:00.446 04:00:14 -- common/autotest_common.sh@1205 -- # local i=0 00:09:00.446 04:00:14 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:00.446 04:00:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:00.446 04:00:14 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:00.446 04:00:14 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:00.446 04:00:14 -- common/autotest_common.sh@1217 -- # return 0 00:09:00.446 04:00:14 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:09:00.446 04:00:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:00.446 04:00:14 -- common/autotest_common.sh@10 -- # set +x 00:09:00.446 04:00:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:00.446 04:00:14 -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:00.446 04:00:14 -- common/autotest_common.sh@638 -- # local es=0 00:09:00.446 04:00:14 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect -i 15 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:00.446 04:00:14 -- common/autotest_common.sh@626 -- # local arg=nvme 00:09:00.446 04:00:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:00.446 04:00:14 -- common/autotest_common.sh@630 -- # type -t nvme 00:09:00.446 04:00:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:00.446 04:00:14 -- common/autotest_common.sh@632 -- # type -P nvme 00:09:00.446 04:00:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:00.446 04:00:14 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:09:00.446 04:00:14 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:09:00.446 04:00:14 -- common/autotest_common.sh@641 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:00.446 [2024-04-19 04:00:14.750468] ctrlr.c: 778:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562' 00:09:00.446 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:00.446 could not add new controller: failed to write to nvme-fabrics device 00:09:00.446 04:00:14 -- common/autotest_common.sh@641 -- # es=1 00:09:00.446 04:00:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:00.446 04:00:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:00.446 04:00:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:00.446 04:00:14 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:00.446 04:00:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:00.446 04:00:14 -- 
common/autotest_common.sh@10 -- # set +x 00:09:00.446 04:00:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:00.446 04:00:14 -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:01.389 04:00:15 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:01.390 04:00:15 -- common/autotest_common.sh@1184 -- # local i=0 00:09:01.390 04:00:15 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:01.390 04:00:15 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:01.390 04:00:15 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:03.300 04:00:17 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:03.300 04:00:17 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:03.300 04:00:17 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:03.300 04:00:17 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:03.300 04:00:17 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:03.300 04:00:17 -- common/autotest_common.sh@1194 -- # return 0 00:09:03.300 04:00:17 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:04.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.239 04:00:18 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:04.239 04:00:18 -- common/autotest_common.sh@1205 -- # local i=0 00:09:04.239 04:00:18 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:04.239 04:00:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:04.239 04:00:18 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:04.239 04:00:18 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:04.239 04:00:18 -- 
common/autotest_common.sh@1217 -- # return 0 00:09:04.239 04:00:18 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:04.239 04:00:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:04.239 04:00:18 -- common/autotest_common.sh@10 -- # set +x 00:09:04.508 04:00:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:04.508 04:00:18 -- target/rpc.sh@81 -- # seq 1 5 00:09:04.508 04:00:18 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:04.508 04:00:18 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:04.508 04:00:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:04.508 04:00:18 -- common/autotest_common.sh@10 -- # set +x 00:09:04.508 04:00:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:04.508 04:00:18 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:04.508 04:00:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:04.508 04:00:18 -- common/autotest_common.sh@10 -- # set +x 00:09:04.508 [2024-04-19 04:00:18.791436] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:04.508 04:00:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:04.508 04:00:18 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:04.508 04:00:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:04.508 04:00:18 -- common/autotest_common.sh@10 -- # set +x 00:09:04.508 04:00:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:04.508 04:00:18 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:04.508 04:00:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:04.508 04:00:18 -- common/autotest_common.sh@10 -- # set +x 00:09:04.508 04:00:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:09:04.508 04:00:18 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:05.456 04:00:19 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:05.456 04:00:19 -- common/autotest_common.sh@1184 -- # local i=0 00:09:05.456 04:00:19 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:05.456 04:00:19 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:05.456 04:00:19 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:07.364 04:00:21 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:07.364 04:00:21 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:07.364 04:00:21 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:07.364 04:00:21 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:07.364 04:00:21 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:07.364 04:00:21 -- common/autotest_common.sh@1194 -- # return 0 00:09:07.364 04:00:21 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:08.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.307 04:00:22 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:08.307 04:00:22 -- common/autotest_common.sh@1205 -- # local i=0 00:09:08.307 04:00:22 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:08.307 04:00:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.307 04:00:22 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:08.307 04:00:22 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.307 04:00:22 -- common/autotest_common.sh@1217 -- # return 0 00:09:08.307 04:00:22 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:09:08.307 04:00:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.307 04:00:22 -- common/autotest_common.sh@10 -- # set +x 00:09:08.307 04:00:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.307 04:00:22 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:08.307 04:00:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.307 04:00:22 -- common/autotest_common.sh@10 -- # set +x 00:09:08.307 04:00:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.307 04:00:22 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:08.307 04:00:22 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:08.307 04:00:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.307 04:00:22 -- common/autotest_common.sh@10 -- # set +x 00:09:08.307 04:00:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.307 04:00:22 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:08.307 04:00:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.307 04:00:22 -- common/autotest_common.sh@10 -- # set +x 00:09:08.307 [2024-04-19 04:00:22.804900] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:08.307 04:00:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.307 04:00:22 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:08.307 04:00:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.307 04:00:22 -- common/autotest_common.sh@10 -- # set +x 00:09:08.307 04:00:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.307 04:00:22 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:08.307 04:00:22 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:09:08.307 04:00:22 -- common/autotest_common.sh@10 -- # set +x 00:09:08.307 04:00:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.307 04:00:22 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:09.689 04:00:23 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:09.689 04:00:23 -- common/autotest_common.sh@1184 -- # local i=0 00:09:09.689 04:00:23 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:09.689 04:00:23 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:09.689 04:00:23 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:11.600 04:00:25 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:11.600 04:00:25 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:11.600 04:00:25 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:11.600 04:00:25 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:11.600 04:00:25 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:11.600 04:00:25 -- common/autotest_common.sh@1194 -- # return 0 00:09:11.600 04:00:25 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:12.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.549 04:00:26 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:12.549 04:00:26 -- common/autotest_common.sh@1205 -- # local i=0 00:09:12.549 04:00:26 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:12.549 04:00:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:12.549 04:00:26 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:12.549 04:00:26 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:12.549 04:00:26 
-- common/autotest_common.sh@1217 -- # return 0 00:09:12.549 04:00:26 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:12.549 04:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:12.549 04:00:26 -- common/autotest_common.sh@10 -- # set +x 00:09:12.549 04:00:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:12.549 04:00:26 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:12.549 04:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:12.549 04:00:26 -- common/autotest_common.sh@10 -- # set +x 00:09:12.549 04:00:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:12.549 04:00:26 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:12.549 04:00:26 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:12.549 04:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:12.549 04:00:26 -- common/autotest_common.sh@10 -- # set +x 00:09:12.549 04:00:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:12.549 04:00:26 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:12.549 04:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:12.549 04:00:26 -- common/autotest_common.sh@10 -- # set +x 00:09:12.549 [2024-04-19 04:00:26.795597] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:12.549 04:00:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:12.549 04:00:26 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:12.549 04:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:12.549 04:00:26 -- common/autotest_common.sh@10 -- # set +x 00:09:12.549 04:00:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:12.549 04:00:26 -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:12.549 04:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:12.549 04:00:26 -- common/autotest_common.sh@10 -- # set +x 00:09:12.549 04:00:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:12.549 04:00:26 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:13.488 04:00:27 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:13.488 04:00:27 -- common/autotest_common.sh@1184 -- # local i=0 00:09:13.488 04:00:27 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:13.488 04:00:27 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:13.488 04:00:27 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:15.398 04:00:29 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:15.398 04:00:29 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:15.398 04:00:29 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:15.398 04:00:29 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:15.398 04:00:29 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:15.398 04:00:29 -- common/autotest_common.sh@1194 -- # return 0 00:09:15.398 04:00:29 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:16.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.342 04:00:30 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:16.342 04:00:30 -- common/autotest_common.sh@1205 -- # local i=0 00:09:16.342 04:00:30 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:16.342 04:00:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:16.342 04:00:30 -- common/autotest_common.sh@1213 -- # lsblk -l 
-o NAME,SERIAL 00:09:16.342 04:00:30 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:16.342 04:00:30 -- common/autotest_common.sh@1217 -- # return 0 00:09:16.342 04:00:30 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.342 04:00:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.342 04:00:30 -- common/autotest_common.sh@10 -- # set +x 00:09:16.342 04:00:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.342 04:00:30 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:16.342 04:00:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.342 04:00:30 -- common/autotest_common.sh@10 -- # set +x 00:09:16.342 04:00:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.342 04:00:30 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:16.342 04:00:30 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:16.342 04:00:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.342 04:00:30 -- common/autotest_common.sh@10 -- # set +x 00:09:16.342 04:00:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.342 04:00:30 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:16.342 04:00:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.342 04:00:30 -- common/autotest_common.sh@10 -- # set +x 00:09:16.342 [2024-04-19 04:00:30.833055] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:16.342 04:00:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.342 04:00:30 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:16.342 04:00:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.342 04:00:30 -- common/autotest_common.sh@10 -- # set +x 
00:09:16.342 04:00:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.342 04:00:30 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:16.342 04:00:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.342 04:00:30 -- common/autotest_common.sh@10 -- # set +x 00:09:16.342 04:00:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.342 04:00:30 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:17.724 04:00:31 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:17.724 04:00:31 -- common/autotest_common.sh@1184 -- # local i=0 00:09:17.724 04:00:31 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:17.724 04:00:31 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:17.724 04:00:31 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:19.631 04:00:33 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:19.631 04:00:33 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:19.631 04:00:33 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:19.631 04:00:33 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:19.631 04:00:33 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:19.631 04:00:33 -- common/autotest_common.sh@1194 -- # return 0 00:09:19.631 04:00:33 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:20.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.569 04:00:34 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:20.569 04:00:34 -- common/autotest_common.sh@1205 -- # local i=0 00:09:20.569 04:00:34 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:20.569 04:00:34 -- 
common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.569 04:00:34 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:20.569 04:00:34 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.569 04:00:34 -- common/autotest_common.sh@1217 -- # return 0 00:09:20.569 04:00:34 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:20.569 04:00:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.569 04:00:34 -- common/autotest_common.sh@10 -- # set +x 00:09:20.569 04:00:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.569 04:00:34 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.569 04:00:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.569 04:00:34 -- common/autotest_common.sh@10 -- # set +x 00:09:20.569 04:00:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.569 04:00:34 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:20.569 04:00:34 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:20.569 04:00:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.569 04:00:34 -- common/autotest_common.sh@10 -- # set +x 00:09:20.569 04:00:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.569 04:00:34 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:20.569 04:00:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.569 04:00:34 -- common/autotest_common.sh@10 -- # set +x 00:09:20.569 [2024-04-19 04:00:34.879210] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:20.569 04:00:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.569 04:00:34 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 
00:09:20.569 04:00:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.569 04:00:34 -- common/autotest_common.sh@10 -- # set +x 00:09:20.569 04:00:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.569 04:00:34 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:20.569 04:00:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.569 04:00:34 -- common/autotest_common.sh@10 -- # set +x 00:09:20.569 04:00:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.569 04:00:34 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:21.504 04:00:35 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:21.504 04:00:35 -- common/autotest_common.sh@1184 -- # local i=0 00:09:21.504 04:00:35 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:21.504 04:00:35 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:21.504 04:00:35 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:23.430 04:00:37 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:23.431 04:00:37 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:23.431 04:00:37 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:23.431 04:00:37 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:23.431 04:00:37 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:23.431 04:00:37 -- common/autotest_common.sh@1194 -- # return 0 00:09:23.431 04:00:37 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:24.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.370 04:00:38 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:24.370 04:00:38 -- 
common/autotest_common.sh@1205 -- # local i=0 00:09:24.370 04:00:38 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:24.370 04:00:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.370 04:00:38 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:24.371 04:00:38 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.371 04:00:38 -- common/autotest_common.sh@1217 -- # return 0 00:09:24.371 04:00:38 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:24.371 04:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.371 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.371 04:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.371 04:00:38 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.371 04:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.371 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.371 04:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.371 04:00:38 -- target/rpc.sh@99 -- # seq 1 5 00:09:24.371 04:00:38 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:24.371 04:00:38 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:24.371 04:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.371 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.371 04:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.371 04:00:38 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:24.371 04:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.371 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.371 [2024-04-19 04:00:38.895565] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 
4420 *** 00:09:24.632 04:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.632 04:00:38 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:24.632 04:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.632 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.632 04:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.632 04:00:38 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:24.632 04:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.632 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.632 04:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.632 04:00:38 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.632 04:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.632 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.632 04:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.632 04:00:38 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.632 04:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.632 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.633 04:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.633 04:00:38 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:24.633 04:00:38 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:24.633 04:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.633 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.633 04:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.633 04:00:38 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:24.633 04:00:38 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:09:24.633 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.633 [2024-04-19 04:00:38.943707] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:24.633 04:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.633 04:00:38 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:24.633 04:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.633 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.633 04:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.633 04:00:38 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:24.633 04:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.633 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.633 04:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.633 04:00:38 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.633 04:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.633 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.633 04:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.633 04:00:38 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.633 04:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.633 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.633 04:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.633 04:00:38 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:24.633 04:00:38 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:24.633 04:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.633 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.633 04:00:38 -- common/autotest_common.sh@577 -- # [[ 
0 == 0 ]] 00:09:24.633 04:00:38 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:24.633 04:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.633 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.633 [2024-04-19 04:00:38.991865] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:24.633 04:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.633 04:00:38 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:24.633 04:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.633 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.633 04:00:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.633 04:00:39 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:24.633 04:00:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.633 04:00:39 -- common/autotest_common.sh@10 -- # set +x 00:09:24.633 04:00:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.633 04:00:39 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.633 04:00:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.633 04:00:39 -- common/autotest_common.sh@10 -- # set +x 00:09:24.633 04:00:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.633 04:00:39 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.633 04:00:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.633 04:00:39 -- common/autotest_common.sh@10 -- # set +x 00:09:24.633 04:00:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.633 04:00:39 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:24.633 04:00:39 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:09:24.633 04:00:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.633 04:00:39 -- common/autotest_common.sh@10 -- # set +x 00:09:24.633 04:00:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.633 04:00:39 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:24.633 04:00:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.633 04:00:39 -- common/autotest_common.sh@10 -- # set +x 00:09:24.633 [2024-04-19 04:00:39.044052] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:24.633 04:00:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.633 04:00:39 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:24.633 04:00:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.633 04:00:39 -- common/autotest_common.sh@10 -- # set +x 00:09:24.633 04:00:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.633 04:00:39 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:24.633 04:00:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.633 04:00:39 -- common/autotest_common.sh@10 -- # set +x 00:09:24.633 04:00:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.633 04:00:39 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.633 04:00:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.633 04:00:39 -- common/autotest_common.sh@10 -- # set +x 00:09:24.633 04:00:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.633 04:00:39 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.633 04:00:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.633 04:00:39 -- common/autotest_common.sh@10 -- # set +x 00:09:24.633 04:00:39 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.633 04:00:39 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:24.633 04:00:39 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:24.633 04:00:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.633 04:00:39 -- common/autotest_common.sh@10 -- # set +x 00:09:24.633 04:00:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.633 04:00:39 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:24.633 04:00:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.633 04:00:39 -- common/autotest_common.sh@10 -- # set +x 00:09:24.633 [2024-04-19 04:00:39.092258] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:24.633 04:00:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.633 04:00:39 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:24.633 04:00:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.633 04:00:39 -- common/autotest_common.sh@10 -- # set +x 00:09:24.633 04:00:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.633 04:00:39 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:24.633 04:00:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.633 04:00:39 -- common/autotest_common.sh@10 -- # set +x 00:09:24.633 04:00:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.634 04:00:39 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.634 04:00:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.634 04:00:39 -- common/autotest_common.sh@10 -- # set +x 00:09:24.634 04:00:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.634 04:00:39 -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.634 04:00:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.634 04:00:39 -- common/autotest_common.sh@10 -- # set +x 00:09:24.634 04:00:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.634 04:00:39 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:24.634 04:00:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.634 04:00:39 -- common/autotest_common.sh@10 -- # set +x 00:09:24.894 04:00:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.894 04:00:39 -- target/rpc.sh@110 -- # stats='{ 00:09:24.894 "tick_rate": 2700000000, 00:09:24.894 "poll_groups": [ 00:09:24.894 { 00:09:24.894 "name": "nvmf_tgt_poll_group_0", 00:09:24.894 "admin_qpairs": 2, 00:09:24.894 "io_qpairs": 27, 00:09:24.894 "current_admin_qpairs": 0, 00:09:24.894 "current_io_qpairs": 0, 00:09:24.894 "pending_bdev_io": 0, 00:09:24.894 "completed_nvme_io": 77, 00:09:24.894 "transports": [ 00:09:24.894 { 00:09:24.894 "trtype": "RDMA", 00:09:24.894 "pending_data_buffer": 0, 00:09:24.894 "devices": [ 00:09:24.894 { 00:09:24.894 "name": "mlx5_0", 00:09:24.894 "polls": 3783684, 00:09:24.894 "idle_polls": 3783439, 00:09:24.894 "completions": 265, 00:09:24.894 "requests": 132, 00:09:24.894 "request_latency": 22665692, 00:09:24.894 "pending_free_request": 0, 00:09:24.894 "pending_rdma_read": 0, 00:09:24.894 "pending_rdma_write": 0, 00:09:24.894 "pending_rdma_send": 0, 00:09:24.894 "total_send_wrs": 207, 00:09:24.894 "send_doorbell_updates": 121, 00:09:24.894 "total_recv_wrs": 4228, 00:09:24.894 "recv_doorbell_updates": 121 00:09:24.894 }, 00:09:24.894 { 00:09:24.894 "name": "mlx5_1", 00:09:24.894 "polls": 3783684, 00:09:24.894 "idle_polls": 3783684, 00:09:24.894 "completions": 0, 00:09:24.894 "requests": 0, 00:09:24.894 "request_latency": 0, 00:09:24.894 "pending_free_request": 0, 00:09:24.894 "pending_rdma_read": 0, 00:09:24.894 "pending_rdma_write": 0, 00:09:24.894 "pending_rdma_send": 0, 
00:09:24.894 "total_send_wrs": 0, 00:09:24.894 "send_doorbell_updates": 0, 00:09:24.894 "total_recv_wrs": 4096, 00:09:24.894 "recv_doorbell_updates": 1 00:09:24.894 } 00:09:24.895 ] 00:09:24.895 } 00:09:24.895 ] 00:09:24.895 }, 00:09:24.895 { 00:09:24.895 "name": "nvmf_tgt_poll_group_1", 00:09:24.895 "admin_qpairs": 2, 00:09:24.895 "io_qpairs": 26, 00:09:24.895 "current_admin_qpairs": 0, 00:09:24.895 "current_io_qpairs": 0, 00:09:24.895 "pending_bdev_io": 0, 00:09:24.895 "completed_nvme_io": 126, 00:09:24.895 "transports": [ 00:09:24.895 { 00:09:24.895 "trtype": "RDMA", 00:09:24.895 "pending_data_buffer": 0, 00:09:24.895 "devices": [ 00:09:24.895 { 00:09:24.895 "name": "mlx5_0", 00:09:24.895 "polls": 3613685, 00:09:24.895 "idle_polls": 3613361, 00:09:24.895 "completions": 364, 00:09:24.895 "requests": 182, 00:09:24.895 "request_latency": 36445922, 00:09:24.895 "pending_free_request": 0, 00:09:24.895 "pending_rdma_read": 0, 00:09:24.895 "pending_rdma_write": 0, 00:09:24.895 "pending_rdma_send": 0, 00:09:24.895 "total_send_wrs": 308, 00:09:24.895 "send_doorbell_updates": 159, 00:09:24.895 "total_recv_wrs": 4278, 00:09:24.895 "recv_doorbell_updates": 160 00:09:24.895 }, 00:09:24.895 { 00:09:24.895 "name": "mlx5_1", 00:09:24.895 "polls": 3613685, 00:09:24.895 "idle_polls": 3613685, 00:09:24.895 "completions": 0, 00:09:24.895 "requests": 0, 00:09:24.895 "request_latency": 0, 00:09:24.895 "pending_free_request": 0, 00:09:24.895 "pending_rdma_read": 0, 00:09:24.895 "pending_rdma_write": 0, 00:09:24.895 "pending_rdma_send": 0, 00:09:24.895 "total_send_wrs": 0, 00:09:24.895 "send_doorbell_updates": 0, 00:09:24.895 "total_recv_wrs": 4096, 00:09:24.895 "recv_doorbell_updates": 1 00:09:24.895 } 00:09:24.895 ] 00:09:24.895 } 00:09:24.895 ] 00:09:24.895 }, 00:09:24.895 { 00:09:24.895 "name": "nvmf_tgt_poll_group_2", 00:09:24.895 "admin_qpairs": 1, 00:09:24.895 "io_qpairs": 26, 00:09:24.895 "current_admin_qpairs": 0, 00:09:24.895 "current_io_qpairs": 0, 00:09:24.895 
"pending_bdev_io": 0, 00:09:24.895 "completed_nvme_io": 175, 00:09:24.895 "transports": [ 00:09:24.895 { 00:09:24.895 "trtype": "RDMA", 00:09:24.895 "pending_data_buffer": 0, 00:09:24.895 "devices": [ 00:09:24.895 { 00:09:24.895 "name": "mlx5_0", 00:09:24.895 "polls": 3710954, 00:09:24.895 "idle_polls": 3710611, 00:09:24.895 "completions": 407, 00:09:24.895 "requests": 203, 00:09:24.895 "request_latency": 50780110, 00:09:24.895 "pending_free_request": 0, 00:09:24.895 "pending_rdma_read": 0, 00:09:24.895 "pending_rdma_write": 0, 00:09:24.895 "pending_rdma_send": 0, 00:09:24.895 "total_send_wrs": 366, 00:09:24.895 "send_doorbell_updates": 166, 00:09:24.895 "total_recv_wrs": 4299, 00:09:24.895 "recv_doorbell_updates": 166 00:09:24.895 }, 00:09:24.895 { 00:09:24.895 "name": "mlx5_1", 00:09:24.895 "polls": 3710954, 00:09:24.895 "idle_polls": 3710954, 00:09:24.895 "completions": 0, 00:09:24.895 "requests": 0, 00:09:24.895 "request_latency": 0, 00:09:24.895 "pending_free_request": 0, 00:09:24.895 "pending_rdma_read": 0, 00:09:24.895 "pending_rdma_write": 0, 00:09:24.895 "pending_rdma_send": 0, 00:09:24.895 "total_send_wrs": 0, 00:09:24.895 "send_doorbell_updates": 0, 00:09:24.895 "total_recv_wrs": 4096, 00:09:24.895 "recv_doorbell_updates": 1 00:09:24.895 } 00:09:24.895 ] 00:09:24.895 } 00:09:24.895 ] 00:09:24.895 }, 00:09:24.895 { 00:09:24.895 "name": "nvmf_tgt_poll_group_3", 00:09:24.895 "admin_qpairs": 2, 00:09:24.895 "io_qpairs": 26, 00:09:24.895 "current_admin_qpairs": 0, 00:09:24.895 "current_io_qpairs": 0, 00:09:24.895 "pending_bdev_io": 0, 00:09:24.895 "completed_nvme_io": 77, 00:09:24.895 "transports": [ 00:09:24.895 { 00:09:24.895 "trtype": "RDMA", 00:09:24.895 "pending_data_buffer": 0, 00:09:24.895 "devices": [ 00:09:24.895 { 00:09:24.895 "name": "mlx5_0", 00:09:24.895 "polls": 2902627, 00:09:24.896 "idle_polls": 2902386, 00:09:24.896 "completions": 264, 00:09:24.896 "requests": 132, 00:09:24.896 "request_latency": 23898128, 00:09:24.896 "pending_free_request": 
0, 00:09:24.896 "pending_rdma_read": 0, 00:09:24.896 "pending_rdma_write": 0, 00:09:24.896 "pending_rdma_send": 0, 00:09:24.896 "total_send_wrs": 208, 00:09:24.896 "send_doorbell_updates": 121, 00:09:24.896 "total_recv_wrs": 4228, 00:09:24.896 "recv_doorbell_updates": 122 00:09:24.896 }, 00:09:24.896 { 00:09:24.896 "name": "mlx5_1", 00:09:24.896 "polls": 2902627, 00:09:24.896 "idle_polls": 2902627, 00:09:24.896 "completions": 0, 00:09:24.896 "requests": 0, 00:09:24.896 "request_latency": 0, 00:09:24.896 "pending_free_request": 0, 00:09:24.896 "pending_rdma_read": 0, 00:09:24.896 "pending_rdma_write": 0, 00:09:24.896 "pending_rdma_send": 0, 00:09:24.896 "total_send_wrs": 0, 00:09:24.896 "send_doorbell_updates": 0, 00:09:24.896 "total_recv_wrs": 4096, 00:09:24.896 "recv_doorbell_updates": 1 00:09:24.896 } 00:09:24.896 ] 00:09:24.896 } 00:09:24.896 ] 00:09:24.896 } 00:09:24.896 ] 00:09:24.896 }' 00:09:24.896 04:00:39 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:24.896 04:00:39 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:24.896 04:00:39 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:24.896 04:00:39 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:24.896 04:00:39 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:24.896 04:00:39 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:24.896 04:00:39 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:24.896 04:00:39 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:24.896 04:00:39 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:24.896 04:00:39 -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:09:24.896 04:00:39 -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:09:24.896 04:00:39 -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:09:24.896 04:00:39 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:09:24.896 04:00:39 -- target/rpc.sh@20 
-- # jq '.poll_groups[].transports[].devices[].completions' 00:09:24.896 04:00:39 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:24.896 04:00:39 -- target/rpc.sh@117 -- # (( 1300 > 0 )) 00:09:24.896 04:00:39 -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:09:24.896 04:00:39 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:09:24.896 04:00:39 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:09:24.896 04:00:39 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:24.896 04:00:39 -- target/rpc.sh@118 -- # (( 133789852 > 0 )) 00:09:24.896 04:00:39 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:24.896 04:00:39 -- target/rpc.sh@123 -- # nvmftestfini 00:09:24.896 04:00:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:24.896 04:00:39 -- nvmf/common.sh@117 -- # sync 00:09:24.896 04:00:39 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:24.896 04:00:39 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:24.896 04:00:39 -- nvmf/common.sh@120 -- # set +e 00:09:24.896 04:00:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:24.896 04:00:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:24.896 rmmod nvme_rdma 00:09:24.896 rmmod nvme_fabrics 00:09:24.896 04:00:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:24.896 04:00:39 -- nvmf/common.sh@124 -- # set -e 00:09:24.896 04:00:39 -- nvmf/common.sh@125 -- # return 0 00:09:24.896 04:00:39 -- nvmf/common.sh@478 -- # '[' -n 186990 ']' 00:09:24.896 04:00:39 -- nvmf/common.sh@479 -- # killprocess 186990 00:09:24.896 04:00:39 -- common/autotest_common.sh@936 -- # '[' -z 186990 ']' 00:09:24.896 04:00:39 -- common/autotest_common.sh@940 -- # kill -0 186990 00:09:24.896 04:00:39 -- common/autotest_common.sh@941 -- # uname 00:09:24.896 04:00:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:24.896 04:00:39 -- common/autotest_common.sh@942 -- 
# ps --no-headers -o comm= 186990 00:09:25.155 04:00:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:25.155 04:00:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:25.155 04:00:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 186990' 00:09:25.155 killing process with pid 186990 00:09:25.155 04:00:39 -- common/autotest_common.sh@955 -- # kill 186990 00:09:25.155 04:00:39 -- common/autotest_common.sh@960 -- # wait 186990 00:09:25.415 04:00:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:25.415 04:00:39 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:09:25.415 00:09:25.415 real 0m36.086s 00:09:25.415 user 2m2.446s 00:09:25.415 sys 0m5.591s 00:09:25.415 04:00:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:25.415 04:00:39 -- common/autotest_common.sh@10 -- # set +x 00:09:25.415 ************************************ 00:09:25.415 END TEST nvmf_rpc 00:09:25.415 ************************************ 00:09:25.415 04:00:39 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:09:25.415 04:00:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:25.415 04:00:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:25.415 04:00:39 -- common/autotest_common.sh@10 -- # set +x 00:09:25.415 ************************************ 00:09:25.415 START TEST nvmf_invalid 00:09:25.415 ************************************ 00:09:25.415 04:00:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:09:25.676 * Looking for test storage... 
00:09:25.676 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:25.676 04:00:39 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.676 04:00:39 -- nvmf/common.sh@7 -- # uname -s 00:09:25.676 04:00:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.676 04:00:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.676 04:00:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.676 04:00:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.676 04:00:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.676 04:00:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.676 04:00:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.676 04:00:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.676 04:00:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.676 04:00:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.676 04:00:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:09:25.676 04:00:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:09:25.676 04:00:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.676 04:00:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.676 04:00:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.676 04:00:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.676 04:00:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:25.676 04:00:39 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.676 04:00:39 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.676 04:00:39 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.676 04:00:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.676 04:00:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.676 04:00:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.677 04:00:39 -- paths/export.sh@5 -- # export PATH 00:09:25.677 04:00:39 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.677 04:00:39 -- nvmf/common.sh@47 -- # : 0 00:09:25.677 04:00:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:25.677 04:00:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:25.677 04:00:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.677 04:00:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.677 04:00:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.677 04:00:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:25.677 04:00:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:25.677 04:00:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:25.677 04:00:39 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:25.677 04:00:39 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:25.677 04:00:39 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:25.677 04:00:39 -- target/invalid.sh@14 -- # target=foobar 00:09:25.677 04:00:39 -- target/invalid.sh@16 -- # RANDOM=0 00:09:25.677 04:00:39 -- target/invalid.sh@34 -- # nvmftestinit 00:09:25.677 04:00:39 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:09:25.677 04:00:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.677 04:00:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:25.677 04:00:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:25.677 04:00:39 -- 
nvmf/common.sh@401 -- # remove_spdk_ns 00:09:25.677 04:00:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.677 04:00:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:25.677 04:00:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.677 04:00:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:25.677 04:00:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:25.677 04:00:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:25.677 04:00:40 -- common/autotest_common.sh@10 -- # set +x 00:09:30.965 04:00:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:30.965 04:00:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:30.965 04:00:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:30.965 04:00:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:30.965 04:00:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:30.965 04:00:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:30.965 04:00:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:30.965 04:00:44 -- nvmf/common.sh@295 -- # net_devs=() 00:09:30.965 04:00:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:30.965 04:00:44 -- nvmf/common.sh@296 -- # e810=() 00:09:30.965 04:00:44 -- nvmf/common.sh@296 -- # local -ga e810 00:09:30.965 04:00:44 -- nvmf/common.sh@297 -- # x722=() 00:09:30.965 04:00:44 -- nvmf/common.sh@297 -- # local -ga x722 00:09:30.965 04:00:44 -- nvmf/common.sh@298 -- # mlx=() 00:09:30.965 04:00:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:30.965 04:00:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:30.965 04:00:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:30.965 04:00:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:30.965 04:00:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:30.965 04:00:44 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:30.965 04:00:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:30.965 04:00:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:30.965 04:00:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:30.965 04:00:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:30.965 04:00:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:30.965 04:00:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:30.965 04:00:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:30.965 04:00:44 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:30.965 04:00:44 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:30.965 04:00:44 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:30.965 04:00:44 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:30.965 04:00:44 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:30.965 04:00:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:30.965 04:00:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.965 04:00:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:30.965 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:30.965 04:00:44 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:30.965 04:00:44 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:30.965 04:00:44 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:30.965 04:00:44 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:30.965 04:00:44 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:30.965 04:00:44 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:30.965 04:00:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.965 04:00:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:30.965 Found 0000:18:00.1 (0x15b3 - 0x1015) 
00:09:30.965 04:00:44 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:30.965 04:00:44 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:30.965 04:00:44 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:30.965 04:00:44 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:30.965 04:00:44 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:30.965 04:00:44 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:30.965 04:00:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:30.965 04:00:44 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:30.965 04:00:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.965 04:00:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.965 04:00:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:30.965 04:00:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.965 04:00:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:30.965 Found net devices under 0000:18:00.0: mlx_0_0 00:09:30.965 04:00:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.965 04:00:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.965 04:00:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.965 04:00:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:30.965 04:00:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.965 04:00:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:30.965 Found net devices under 0000:18:00.1: mlx_0_1 00:09:30.965 04:00:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.965 04:00:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:30.965 04:00:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:30.965 04:00:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:30.965 04:00:44 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:09:30.965 
04:00:44 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:09:30.965 04:00:44 -- nvmf/common.sh@409 -- # rdma_device_init 00:09:30.965 04:00:44 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:09:30.965 04:00:44 -- nvmf/common.sh@58 -- # uname 00:09:30.965 04:00:44 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:30.965 04:00:44 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:30.965 04:00:44 -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:30.965 04:00:44 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:30.965 04:00:44 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:30.965 04:00:44 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:30.965 04:00:44 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:30.965 04:00:44 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:30.965 04:00:44 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:09:30.965 04:00:44 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:30.965 04:00:44 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:30.965 04:00:44 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:30.965 04:00:44 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:30.965 04:00:44 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:30.965 04:00:44 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:30.965 04:00:44 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:30.965 04:00:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:30.965 04:00:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:30.965 04:00:44 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:30.965 04:00:44 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:30.965 04:00:44 -- nvmf/common.sh@105 -- # continue 2 00:09:30.965 04:00:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:30.965 04:00:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:30.965 04:00:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 
== \m\l\x\_\0\_\0 ]] 00:09:30.965 04:00:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:30.965 04:00:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:30.965 04:00:44 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:30.965 04:00:44 -- nvmf/common.sh@105 -- # continue 2 00:09:30.965 04:00:44 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:30.965 04:00:44 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:30.965 04:00:44 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:30.965 04:00:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:30.965 04:00:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:30.965 04:00:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:30.965 04:00:44 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:30.965 04:00:44 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:30.965 04:00:44 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:30.965 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:30.965 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:09:30.965 altname enp24s0f0np0 00:09:30.965 altname ens785f0np0 00:09:30.965 inet 192.168.100.8/24 scope global mlx_0_0 00:09:30.965 valid_lft forever preferred_lft forever 00:09:30.965 04:00:44 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:30.965 04:00:44 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:30.965 04:00:44 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:30.965 04:00:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:30.965 04:00:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:30.965 04:00:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:30.965 04:00:44 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:30.965 04:00:44 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:30.965 04:00:44 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:30.965 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:30.965 link/ether 
50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:09:30.965 altname enp24s0f1np1 00:09:30.965 altname ens785f1np1 00:09:30.965 inet 192.168.100.9/24 scope global mlx_0_1 00:09:30.965 valid_lft forever preferred_lft forever 00:09:30.965 04:00:44 -- nvmf/common.sh@411 -- # return 0 00:09:30.965 04:00:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:30.965 04:00:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:30.965 04:00:44 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:09:30.965 04:00:44 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:09:30.965 04:00:44 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:30.965 04:00:44 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:30.965 04:00:44 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:30.965 04:00:44 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:30.965 04:00:44 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:30.965 04:00:44 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:30.965 04:00:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:30.965 04:00:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:30.965 04:00:44 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:30.965 04:00:44 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:30.965 04:00:44 -- nvmf/common.sh@105 -- # continue 2 00:09:30.965 04:00:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:30.965 04:00:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:30.965 04:00:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:30.966 04:00:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:30.966 04:00:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:30.966 04:00:44 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:30.966 04:00:44 -- nvmf/common.sh@105 -- # continue 2 00:09:30.966 04:00:44 -- nvmf/common.sh@86 
-- # for nic_name in $(get_rdma_if_list) 00:09:30.966 04:00:44 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:30.966 04:00:44 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:30.966 04:00:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:30.966 04:00:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:30.966 04:00:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:30.966 04:00:45 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:30.966 04:00:45 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:30.966 04:00:45 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:30.966 04:00:45 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:30.966 04:00:45 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:30.966 04:00:45 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:30.966 04:00:45 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:09:30.966 192.168.100.9' 00:09:30.966 04:00:45 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:30.966 192.168.100.9' 00:09:30.966 04:00:45 -- nvmf/common.sh@446 -- # head -n 1 00:09:30.966 04:00:45 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:30.966 04:00:45 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:09:30.966 192.168.100.9' 00:09:30.966 04:00:45 -- nvmf/common.sh@447 -- # tail -n +2 00:09:30.966 04:00:45 -- nvmf/common.sh@447 -- # head -n 1 00:09:30.966 04:00:45 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:30.966 04:00:45 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:09:30.966 04:00:45 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:30.966 04:00:45 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:09:30.966 04:00:45 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:09:30.966 04:00:45 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:09:30.966 04:00:45 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:30.966 04:00:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:30.966 
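The address discovery traced above reduces to two small shell pipelines: `get_ip_address` pulls an interface's IPv4 out of `ip -o -4 addr show` with `awk`/`cut`, and the collected `RDMA_IP_LIST` is split into first and second target IPs with `head`/`tail`. A minimal sketch of both (the interface argument and the two addresses below are sample values mirroring the log, not queried from a live system):

```shell
# Sketch of get_ip_address: field 4 of the one-line `ip -o -4` output is
# "addr/prefix"; cut drops the CIDR suffix.
get_ip_address() {
  local interface=$1
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# Splitting the collected list the way nvmf/common.sh@446-447 does
# (sample data matching the 192.168.100.8/9 pair in the log):
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
```

`tail -n +2 | head -n 1` rather than plain `tail -n 1` keeps the selection correct even if more than two RDMA interfaces are found.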
04:00:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:30.966 04:00:45 -- common/autotest_common.sh@10 -- # set +x 00:09:30.966 04:00:45 -- nvmf/common.sh@470 -- # nvmfpid=195844 00:09:30.966 04:00:45 -- nvmf/common.sh@471 -- # waitforlisten 195844 00:09:30.966 04:00:45 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:30.966 04:00:45 -- common/autotest_common.sh@817 -- # '[' -z 195844 ']' 00:09:30.966 04:00:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.966 04:00:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:30.966 04:00:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.966 04:00:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:30.966 04:00:45 -- common/autotest_common.sh@10 -- # set +x 00:09:30.966 [2024-04-19 04:00:45.099920] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:09:30.966 [2024-04-19 04:00:45.099964] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.966 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.966 [2024-04-19 04:00:45.150936] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.966 [2024-04-19 04:00:45.223519] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.966 [2024-04-19 04:00:45.223554] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
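The `waitforlisten 195844` call above gates the test on the target coming up: it polls until the app is both still alive and listening on its RPC UNIX socket, bounded by `max_retries`. A simplified, hypothetical version of that loop (the function name, `-S` socket test, and 0.1 s interval are illustrative, not the exact SPDK helper):

```shell
# Hypothetical sketch of the waitforlisten idea: succeed once $pid is alive
# AND the RPC socket exists; fail if the app dies or we run out of retries.
wait_for_rpc_socket() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
  local max_retries=100 i
  for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
    [ -S "$rpc_addr" ] && return 0           # socket is up, RPC is reachable
    sleep 0.1
  done
  return 1                                   # timed out
}
```

Checking `kill -0` before the socket test is what lets the harness fail fast when `nvmf_tgt` crashes during startup instead of burning the whole retry budget.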
00:09:30.966 [2024-04-19 04:00:45.223561] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.966 [2024-04-19 04:00:45.223566] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.966 [2024-04-19 04:00:45.223571] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:30.966 [2024-04-19 04:00:45.223604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.966 [2024-04-19 04:00:45.223686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.966 [2024-04-19 04:00:45.223773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.966 [2024-04-19 04:00:45.223775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.538 04:00:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:31.538 04:00:45 -- common/autotest_common.sh@850 -- # return 0 00:09:31.538 04:00:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:31.538 04:00:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:31.538 04:00:45 -- common/autotest_common.sh@10 -- # set +x 00:09:31.538 04:00:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.538 04:00:45 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:31.538 04:00:45 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode23659 00:09:31.538 [2024-04-19 04:00:46.051468] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:31.798 04:00:46 -- target/invalid.sh@40 -- # out='request: 00:09:31.798 { 00:09:31.798 "nqn": "nqn.2016-06.io.spdk:cnode23659", 00:09:31.798 "tgt_name": "foobar", 00:09:31.798 "method": "nvmf_create_subsystem", 00:09:31.798 "req_id": 1 
00:09:31.798 } 00:09:31.798 Got JSON-RPC error response 00:09:31.798 response: 00:09:31.798 { 00:09:31.798 "code": -32603, 00:09:31.798 "message": "Unable to find target foobar" 00:09:31.798 }' 00:09:31.798 04:00:46 -- target/invalid.sh@41 -- # [[ request: 00:09:31.798 { 00:09:31.798 "nqn": "nqn.2016-06.io.spdk:cnode23659", 00:09:31.798 "tgt_name": "foobar", 00:09:31.798 "method": "nvmf_create_subsystem", 00:09:31.798 "req_id": 1 00:09:31.798 } 00:09:31.798 Got JSON-RPC error response 00:09:31.798 response: 00:09:31.798 { 00:09:31.798 "code": -32603, 00:09:31.798 "message": "Unable to find target foobar" 00:09:31.798 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:31.798 04:00:46 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:31.798 04:00:46 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode1983 00:09:31.798 [2024-04-19 04:00:46.220018] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1983: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:31.798 04:00:46 -- target/invalid.sh@45 -- # out='request: 00:09:31.798 { 00:09:31.798 "nqn": "nqn.2016-06.io.spdk:cnode1983", 00:09:31.798 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:31.798 "method": "nvmf_create_subsystem", 00:09:31.798 "req_id": 1 00:09:31.798 } 00:09:31.798 Got JSON-RPC error response 00:09:31.798 response: 00:09:31.798 { 00:09:31.798 "code": -32602, 00:09:31.798 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:31.798 }' 00:09:31.798 04:00:46 -- target/invalid.sh@46 -- # [[ request: 00:09:31.798 { 00:09:31.798 "nqn": "nqn.2016-06.io.spdk:cnode1983", 00:09:31.798 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:31.798 "method": "nvmf_create_subsystem", 00:09:31.798 "req_id": 1 00:09:31.798 } 00:09:31.798 Got JSON-RPC error response 00:09:31.798 response: 00:09:31.798 { 00:09:31.798 "code": -32602, 00:09:31.798 "message": 
"Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:31.798 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:31.798 04:00:46 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:31.798 04:00:46 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode13179 00:09:32.059 [2024-04-19 04:00:46.396562] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13179: invalid model number 'SPDK_Controller' 00:09:32.059 04:00:46 -- target/invalid.sh@50 -- # out='request: 00:09:32.059 { 00:09:32.059 "nqn": "nqn.2016-06.io.spdk:cnode13179", 00:09:32.059 "model_number": "SPDK_Controller\u001f", 00:09:32.059 "method": "nvmf_create_subsystem", 00:09:32.059 "req_id": 1 00:09:32.059 } 00:09:32.059 Got JSON-RPC error response 00:09:32.059 response: 00:09:32.059 { 00:09:32.059 "code": -32602, 00:09:32.059 "message": "Invalid MN SPDK_Controller\u001f" 00:09:32.059 }' 00:09:32.059 04:00:46 -- target/invalid.sh@51 -- # [[ request: 00:09:32.059 { 00:09:32.059 "nqn": "nqn.2016-06.io.spdk:cnode13179", 00:09:32.059 "model_number": "SPDK_Controller\u001f", 00:09:32.059 "method": "nvmf_create_subsystem", 00:09:32.059 "req_id": 1 00:09:32.059 } 00:09:32.059 Got JSON-RPC error response 00:09:32.059 response: 00:09:32.059 { 00:09:32.059 "code": -32602, 00:09:32.059 "message": "Invalid MN SPDK_Controller\u001f" 00:09:32.059 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:32.059 04:00:46 -- target/invalid.sh@54 -- # gen_random_s 21 00:09:32.059 04:00:46 -- target/invalid.sh@19 -- # local length=21 ll 00:09:32.059 04:00:46 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' 
'100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:32.059 04:00:46 -- target/invalid.sh@21 -- # local chars 00:09:32.059 04:00:46 -- target/invalid.sh@22 -- # local string 00:09:32.059 04:00:46 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # printf %x 114 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # string+=r 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # printf %x 89 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # string+=Y 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # printf %x 96 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # string+='`' 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # printf %x 109 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # string+=m 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # printf %x 88 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # string+=X 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.060 04:00:46 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # printf %x 114 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # string+=r 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # printf %x 76 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # string+=L 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # printf %x 88 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # string+=X 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # printf %x 76 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # string+=L 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # printf %x 89 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # string+=Y 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # printf %x 67 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # string+=C 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.060 04:00:46 
-- target/invalid.sh@25 -- # printf %x 104 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x68' 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # string+=h 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # printf %x 68 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # string+=D 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # printf %x 98 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # string+=b 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # printf %x 32 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # string+=' ' 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # printf %x 81 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # string+=Q 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # printf %x 126 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # string+='~' 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # printf %x 79 00:09:32.060 
04:00:46 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # string+=O 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # printf %x 43 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # string+=+ 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # printf %x 124 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # string+='|' 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # printf %x 119 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x77' 00:09:32.060 04:00:46 -- target/invalid.sh@25 -- # string+=w 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.060 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.061 04:00:46 -- target/invalid.sh@28 -- # [[ r == \- ]] 00:09:32.061 04:00:46 -- target/invalid.sh@31 -- # echo 'rY`mXrLXLYChDb Q~O+|w' 00:09:32.061 04:00:46 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'rY`mXrLXLYChDb Q~O+|w' nqn.2016-06.io.spdk:cnode2698 00:09:32.321 [2024-04-19 04:00:46.705532] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2698: invalid serial number 'rY`mXrLXLYChDb Q~O+|w' 00:09:32.321 04:00:46 -- target/invalid.sh@54 -- # out='request: 00:09:32.321 { 00:09:32.321 "nqn": "nqn.2016-06.io.spdk:cnode2698", 00:09:32.321 "serial_number": "rY`mXrLXLYChDb Q~O+|w", 00:09:32.321 "method": "nvmf_create_subsystem", 00:09:32.321 "req_id": 1 
00:09:32.321 } 00:09:32.321 Got JSON-RPC error response 00:09:32.321 response: 00:09:32.321 { 00:09:32.321 "code": -32602, 00:09:32.321 "message": "Invalid SN rY`mXrLXLYChDb Q~O+|w" 00:09:32.321 }' 00:09:32.321 04:00:46 -- target/invalid.sh@55 -- # [[ request: 00:09:32.321 { 00:09:32.321 "nqn": "nqn.2016-06.io.spdk:cnode2698", 00:09:32.321 "serial_number": "rY`mXrLXLYChDb Q~O+|w", 00:09:32.321 "method": "nvmf_create_subsystem", 00:09:32.321 "req_id": 1 00:09:32.321 } 00:09:32.321 Got JSON-RPC error response 00:09:32.321 response: 00:09:32.321 { 00:09:32.321 "code": -32602, 00:09:32.321 "message": "Invalid SN rY`mXrLXLYChDb Q~O+|w" 00:09:32.321 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:32.321 04:00:46 -- target/invalid.sh@58 -- # gen_random_s 41 00:09:32.321 04:00:46 -- target/invalid.sh@19 -- # local length=41 ll 00:09:32.321 04:00:46 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:32.321 04:00:46 -- target/invalid.sh@21 -- # local chars 00:09:32.321 04:00:46 -- target/invalid.sh@22 -- # local string 00:09:32.321 04:00:46 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:32.321 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.321 04:00:46 -- target/invalid.sh@25 -- # printf %x 93 00:09:32.321 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:32.321 04:00:46 -- target/invalid.sh@25 -- # string+=']' 00:09:32.321 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.321 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.321 04:00:46 -- 
target/invalid.sh@25 -- # printf %x 121 00:09:32.321 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x79' 00:09:32.321 04:00:46 -- target/invalid.sh@25 -- # string+=y 00:09:32.321 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.321 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.321 04:00:46 -- target/invalid.sh@25 -- # printf %x 66 00:09:32.321 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x42' 00:09:32.321 04:00:46 -- target/invalid.sh@25 -- # string+=B 00:09:32.321 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.321 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.321 04:00:46 -- target/invalid.sh@25 -- # printf %x 118 00:09:32.321 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:32.321 04:00:46 -- target/invalid.sh@25 -- # string+=v 00:09:32.321 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.321 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.321 04:00:46 -- target/invalid.sh@25 -- # printf %x 83 00:09:32.321 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:32.321 04:00:46 -- target/invalid.sh@25 -- # string+=S 00:09:32.321 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.321 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.321 04:00:46 -- target/invalid.sh@25 -- # printf %x 35 00:09:32.321 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:32.321 04:00:46 -- target/invalid.sh@25 -- # string+='#' 00:09:32.321 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.321 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.321 04:00:46 -- target/invalid.sh@25 -- # printf %x 37 00:09:32.321 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:32.321 04:00:46 -- target/invalid.sh@25 -- # string+=% 00:09:32.321 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.321 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.321 04:00:46 -- target/invalid.sh@25 -- # printf %x 49 00:09:32.321 04:00:46 -- 
target/invalid.sh@25 -- # echo -e '\x31' 00:09:32.321 04:00:46 -- target/invalid.sh@25 -- # string+=1 00:09:32.321 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # printf %x 67 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # string+=C 00:09:32.322 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # printf %x 95 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # string+=_ 00:09:32.322 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # printf %x 94 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # string+='^' 00:09:32.322 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # printf %x 48 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # string+=0 00:09:32.322 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # printf %x 46 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # string+=. 
00:09:32.322 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # printf %x 103 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x67' 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # string+=g 00:09:32.322 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # printf %x 103 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x67' 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # string+=g 00:09:32.322 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # printf %x 116 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # string+=t 00:09:32.322 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # printf %x 95 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # string+=_ 00:09:32.322 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # printf %x 80 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # string+=P 00:09:32.322 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # printf %x 99 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x63' 00:09:32.322 04:00:46 -- target/invalid.sh@25 -- # string+=c 00:09:32.322 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:32.322 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.582 04:00:46 -- target/invalid.sh@25 -- # printf %x 57 00:09:32.582 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:32.582 04:00:46 -- target/invalid.sh@25 -- # string+=9 00:09:32.582 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.582 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.582 04:00:46 -- target/invalid.sh@25 -- # printf %x 95 00:09:32.582 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:32.582 04:00:46 -- target/invalid.sh@25 -- # string+=_ 00:09:32.582 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.582 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.582 04:00:46 -- target/invalid.sh@25 -- # printf %x 37 00:09:32.582 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:32.582 04:00:46 -- target/invalid.sh@25 -- # string+=% 00:09:32.582 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.582 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.582 04:00:46 -- target/invalid.sh@25 -- # printf %x 56 00:09:32.582 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x38' 00:09:32.582 04:00:46 -- target/invalid.sh@25 -- # string+=8 00:09:32.582 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.582 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.582 04:00:46 -- target/invalid.sh@25 -- # printf %x 83 00:09:32.582 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:32.582 04:00:46 -- target/invalid.sh@25 -- # string+=S 00:09:32.582 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.582 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.582 04:00:46 -- target/invalid.sh@25 -- # printf %x 124 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # string+='|' 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll < 
length )) 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # printf %x 55 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x37' 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # string+=7 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # printf %x 84 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # string+=T 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # printf %x 83 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # string+=S 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # printf %x 97 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # string+=a 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # printf %x 42 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # string+='*' 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # printf %x 78 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # string+=N 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # 
printf %x 96 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # string+='`' 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # printf %x 53 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # string+=5 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # printf %x 121 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x79' 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # string+=y 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # printf %x 68 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # string+=D 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # printf %x 76 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # string+=L 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # printf %x 36 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # string+='$' 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # printf %x 117 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- 
# echo -e '\x75' 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # string+=u 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # printf %x 80 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # string+=P 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # printf %x 39 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # string+=\' 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # printf %x 37 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:32.583 04:00:46 -- target/invalid.sh@25 -- # string+=% 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.583 04:00:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.583 04:00:46 -- target/invalid.sh@28 -- # [[ ] == \- ]] 00:09:32.583 04:00:46 -- target/invalid.sh@31 -- # echo ']yBvS#%1C_^0.ggt_Pc9_%8S|7TSa*N`5yDL$uP'\''%' 00:09:32.583 04:00:46 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ']yBvS#%1C_^0.ggt_Pc9_%8S|7TSa*N`5yDL$uP'\''%' nqn.2016-06.io.spdk:cnode17022 00:09:32.846 [2024-04-19 04:00:47.114839] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17022: invalid model number ']yBvS#%1C_^0.ggt_Pc9_%8S|7TSa*N`5yDL$uP'%' 00:09:32.846 04:00:47 -- target/invalid.sh@58 -- # out='request: 00:09:32.846 { 00:09:32.846 "nqn": "nqn.2016-06.io.spdk:cnode17022", 00:09:32.846 "model_number": "]yBvS#%1C_^0.ggt_Pc9_%8S|7TSa*N`5yDL$uP'\''%", 00:09:32.846 "method": 
"nvmf_create_subsystem", 00:09:32.846 "req_id": 1 00:09:32.846 } 00:09:32.846 Got JSON-RPC error response 00:09:32.846 response: 00:09:32.846 { 00:09:32.846 "code": -32602, 00:09:32.846 "message": "Invalid MN ]yBvS#%1C_^0.ggt_Pc9_%8S|7TSa*N`5yDL$uP'\''%" 00:09:32.846 }' 00:09:32.846 04:00:47 -- target/invalid.sh@59 -- # [[ request: 00:09:32.846 { 00:09:32.846 "nqn": "nqn.2016-06.io.spdk:cnode17022", 00:09:32.846 "model_number": "]yBvS#%1C_^0.ggt_Pc9_%8S|7TSa*N`5yDL$uP'%", 00:09:32.846 "method": "nvmf_create_subsystem", 00:09:32.846 "req_id": 1 00:09:32.846 } 00:09:32.846 Got JSON-RPC error response 00:09:32.846 response: 00:09:32.846 { 00:09:32.846 "code": -32602, 00:09:32.846 "message": "Invalid MN ]yBvS#%1C_^0.ggt_Pc9_%8S|7TSa*N`5yDL$uP'%" 00:09:32.846 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:32.846 04:00:47 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:09:32.846 [2024-04-19 04:00:47.302548] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x548d80/0x54d270) succeed. 00:09:32.846 [2024-04-19 04:00:47.311453] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x54a370/0x58e900) succeed. 
00:09:33.107 04:00:47 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:33.107 04:00:47 -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:09:33.107 04:00:47 -- target/invalid.sh@67 -- # head -n 1 00:09:33.107 04:00:47 -- target/invalid.sh@67 -- # echo '192.168.100.8 00:09:33.107 192.168.100.9' 00:09:33.107 04:00:47 -- target/invalid.sh@67 -- # IP=192.168.100.8 00:09:33.107 04:00:47 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:09:33.368 [2024-04-19 04:00:47.761526] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:33.368 04:00:47 -- target/invalid.sh@69 -- # out='request: 00:09:33.368 { 00:09:33.368 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:33.368 "listen_address": { 00:09:33.368 "trtype": "rdma", 00:09:33.368 "traddr": "192.168.100.8", 00:09:33.368 "trsvcid": "4421" 00:09:33.368 }, 00:09:33.368 "method": "nvmf_subsystem_remove_listener", 00:09:33.368 "req_id": 1 00:09:33.368 } 00:09:33.368 Got JSON-RPC error response 00:09:33.368 response: 00:09:33.368 { 00:09:33.368 "code": -32602, 00:09:33.368 "message": "Invalid parameters" 00:09:33.368 }' 00:09:33.368 04:00:47 -- target/invalid.sh@70 -- # [[ request: 00:09:33.368 { 00:09:33.368 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:33.368 "listen_address": { 00:09:33.368 "trtype": "rdma", 00:09:33.368 "traddr": "192.168.100.8", 00:09:33.368 "trsvcid": "4421" 00:09:33.368 }, 00:09:33.368 "method": "nvmf_subsystem_remove_listener", 00:09:33.368 "req_id": 1 00:09:33.368 } 00:09:33.368 Got JSON-RPC error response 00:09:33.368 response: 00:09:33.368 { 00:09:33.368 "code": -32602, 00:09:33.368 "message": "Invalid parameters" 00:09:33.368 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:33.368 04:00:47 -- target/invalid.sh@73 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29352 -i 0 00:09:33.628 [2024-04-19 04:00:47.938079] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29352: invalid cntlid range [0-65519] 00:09:33.628 04:00:47 -- target/invalid.sh@73 -- # out='request: 00:09:33.628 { 00:09:33.628 "nqn": "nqn.2016-06.io.spdk:cnode29352", 00:09:33.628 "min_cntlid": 0, 00:09:33.628 "method": "nvmf_create_subsystem", 00:09:33.628 "req_id": 1 00:09:33.628 } 00:09:33.628 Got JSON-RPC error response 00:09:33.628 response: 00:09:33.628 { 00:09:33.628 "code": -32602, 00:09:33.628 "message": "Invalid cntlid range [0-65519]" 00:09:33.628 }' 00:09:33.628 04:00:47 -- target/invalid.sh@74 -- # [[ request: 00:09:33.628 { 00:09:33.628 "nqn": "nqn.2016-06.io.spdk:cnode29352", 00:09:33.628 "min_cntlid": 0, 00:09:33.628 "method": "nvmf_create_subsystem", 00:09:33.628 "req_id": 1 00:09:33.628 } 00:09:33.628 Got JSON-RPC error response 00:09:33.628 response: 00:09:33.628 { 00:09:33.628 "code": -32602, 00:09:33.628 "message": "Invalid cntlid range [0-65519]" 00:09:33.628 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:33.628 04:00:47 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24036 -i 65520 00:09:33.629 [2024-04-19 04:00:48.114685] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24036: invalid cntlid range [65520-65519] 00:09:33.629 04:00:48 -- target/invalid.sh@75 -- # out='request: 00:09:33.629 { 00:09:33.629 "nqn": "nqn.2016-06.io.spdk:cnode24036", 00:09:33.629 "min_cntlid": 65520, 00:09:33.629 "method": "nvmf_create_subsystem", 00:09:33.629 "req_id": 1 00:09:33.629 } 00:09:33.629 Got JSON-RPC error response 00:09:33.629 response: 00:09:33.629 { 00:09:33.629 "code": -32602, 00:09:33.629 "message": "Invalid cntlid range [65520-65519]" 00:09:33.629 }' 00:09:33.629 
04:00:48 -- target/invalid.sh@76 -- # [[ request: 00:09:33.629 { 00:09:33.629 "nqn": "nqn.2016-06.io.spdk:cnode24036", 00:09:33.629 "min_cntlid": 65520, 00:09:33.629 "method": "nvmf_create_subsystem", 00:09:33.629 "req_id": 1 00:09:33.629 } 00:09:33.629 Got JSON-RPC error response 00:09:33.629 response: 00:09:33.629 { 00:09:33.629 "code": -32602, 00:09:33.629 "message": "Invalid cntlid range [65520-65519]" 00:09:33.629 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:33.629 04:00:48 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16622 -I 0 00:09:33.888 [2024-04-19 04:00:48.279258] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16622: invalid cntlid range [1-0] 00:09:33.889 04:00:48 -- target/invalid.sh@77 -- # out='request: 00:09:33.889 { 00:09:33.889 "nqn": "nqn.2016-06.io.spdk:cnode16622", 00:09:33.889 "max_cntlid": 0, 00:09:33.889 "method": "nvmf_create_subsystem", 00:09:33.889 "req_id": 1 00:09:33.889 } 00:09:33.889 Got JSON-RPC error response 00:09:33.889 response: 00:09:33.889 { 00:09:33.889 "code": -32602, 00:09:33.889 "message": "Invalid cntlid range [1-0]" 00:09:33.889 }' 00:09:33.889 04:00:48 -- target/invalid.sh@78 -- # [[ request: 00:09:33.889 { 00:09:33.889 "nqn": "nqn.2016-06.io.spdk:cnode16622", 00:09:33.889 "max_cntlid": 0, 00:09:33.889 "method": "nvmf_create_subsystem", 00:09:33.889 "req_id": 1 00:09:33.889 } 00:09:33.889 Got JSON-RPC error response 00:09:33.889 response: 00:09:33.889 { 00:09:33.889 "code": -32602, 00:09:33.889 "message": "Invalid cntlid range [1-0]" 00:09:33.889 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:33.889 04:00:48 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22826 -I 65520 00:09:34.149 [2024-04-19 04:00:48.451875] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode22826: invalid cntlid range [1-65520] 00:09:34.149 04:00:48 -- target/invalid.sh@79 -- # out='request: 00:09:34.149 { 00:09:34.149 "nqn": "nqn.2016-06.io.spdk:cnode22826", 00:09:34.149 "max_cntlid": 65520, 00:09:34.149 "method": "nvmf_create_subsystem", 00:09:34.149 "req_id": 1 00:09:34.149 } 00:09:34.149 Got JSON-RPC error response 00:09:34.149 response: 00:09:34.149 { 00:09:34.149 "code": -32602, 00:09:34.149 "message": "Invalid cntlid range [1-65520]" 00:09:34.149 }' 00:09:34.149 04:00:48 -- target/invalid.sh@80 -- # [[ request: 00:09:34.149 { 00:09:34.149 "nqn": "nqn.2016-06.io.spdk:cnode22826", 00:09:34.149 "max_cntlid": 65520, 00:09:34.149 "method": "nvmf_create_subsystem", 00:09:34.149 "req_id": 1 00:09:34.149 } 00:09:34.149 Got JSON-RPC error response 00:09:34.149 response: 00:09:34.149 { 00:09:34.149 "code": -32602, 00:09:34.149 "message": "Invalid cntlid range [1-65520]" 00:09:34.149 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:34.149 04:00:48 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode911 -i 6 -I 5 00:09:34.149 [2024-04-19 04:00:48.612422] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode911: invalid cntlid range [6-5] 00:09:34.149 04:00:48 -- target/invalid.sh@83 -- # out='request: 00:09:34.149 { 00:09:34.149 "nqn": "nqn.2016-06.io.spdk:cnode911", 00:09:34.149 "min_cntlid": 6, 00:09:34.149 "max_cntlid": 5, 00:09:34.149 "method": "nvmf_create_subsystem", 00:09:34.149 "req_id": 1 00:09:34.149 } 00:09:34.149 Got JSON-RPC error response 00:09:34.149 response: 00:09:34.149 { 00:09:34.149 "code": -32602, 00:09:34.149 "message": "Invalid cntlid range [6-5]" 00:09:34.149 }' 00:09:34.149 04:00:48 -- target/invalid.sh@84 -- # [[ request: 00:09:34.149 { 00:09:34.149 "nqn": "nqn.2016-06.io.spdk:cnode911", 00:09:34.149 "min_cntlid": 6, 00:09:34.149 "max_cntlid": 5, 00:09:34.149 "method": 
"nvmf_create_subsystem", 00:09:34.149 "req_id": 1 00:09:34.149 } 00:09:34.149 Got JSON-RPC error response 00:09:34.149 response: 00:09:34.149 { 00:09:34.149 "code": -32602, 00:09:34.149 "message": "Invalid cntlid range [6-5]" 00:09:34.149 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:34.149 04:00:48 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:34.409 04:00:48 -- target/invalid.sh@87 -- # out='request: 00:09:34.409 { 00:09:34.409 "name": "foobar", 00:09:34.409 "method": "nvmf_delete_target", 00:09:34.409 "req_id": 1 00:09:34.409 } 00:09:34.409 Got JSON-RPC error response 00:09:34.409 response: 00:09:34.409 { 00:09:34.409 "code": -32602, 00:09:34.409 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:34.409 }' 00:09:34.409 04:00:48 -- target/invalid.sh@88 -- # [[ request: 00:09:34.409 { 00:09:34.409 "name": "foobar", 00:09:34.409 "method": "nvmf_delete_target", 00:09:34.409 "req_id": 1 00:09:34.409 } 00:09:34.409 Got JSON-RPC error response 00:09:34.409 response: 00:09:34.409 { 00:09:34.409 "code": -32602, 00:09:34.409 "message": "The specified target doesn't exist, cannot delete it." 
00:09:34.409 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:34.409 04:00:48 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:34.409 04:00:48 -- target/invalid.sh@91 -- # nvmftestfini 00:09:34.409 04:00:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:34.409 04:00:48 -- nvmf/common.sh@117 -- # sync 00:09:34.409 04:00:48 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:34.409 04:00:48 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:34.409 04:00:48 -- nvmf/common.sh@120 -- # set +e 00:09:34.409 04:00:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:34.410 04:00:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:34.410 rmmod nvme_rdma 00:09:34.410 rmmod nvme_fabrics 00:09:34.410 04:00:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:34.410 04:00:48 -- nvmf/common.sh@124 -- # set -e 00:09:34.410 04:00:48 -- nvmf/common.sh@125 -- # return 0 00:09:34.410 04:00:48 -- nvmf/common.sh@478 -- # '[' -n 195844 ']' 00:09:34.410 04:00:48 -- nvmf/common.sh@479 -- # killprocess 195844 00:09:34.410 04:00:48 -- common/autotest_common.sh@936 -- # '[' -z 195844 ']' 00:09:34.410 04:00:48 -- common/autotest_common.sh@940 -- # kill -0 195844 00:09:34.410 04:00:48 -- common/autotest_common.sh@941 -- # uname 00:09:34.410 04:00:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:34.410 04:00:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 195844 00:09:34.410 04:00:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:34.410 04:00:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:34.410 04:00:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 195844' 00:09:34.410 killing process with pid 195844 00:09:34.410 04:00:48 -- common/autotest_common.sh@955 -- # kill 195844 00:09:34.410 04:00:48 -- common/autotest_common.sh@960 -- # wait 195844 00:09:34.670 04:00:49 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:34.670 04:00:49 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:09:34.670 00:09:34.670 real 0m9.204s 00:09:34.670 user 0m18.723s 00:09:34.670 sys 0m4.705s 00:09:34.670 04:00:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:34.670 04:00:49 -- common/autotest_common.sh@10 -- # set +x 00:09:34.670 ************************************ 00:09:34.670 END TEST nvmf_invalid 00:09:34.670 ************************************ 00:09:34.670 04:00:49 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:09:34.670 04:00:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:34.670 04:00:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:34.670 04:00:49 -- common/autotest_common.sh@10 -- # set +x 00:09:34.930 ************************************ 00:09:34.930 START TEST nvmf_abort 00:09:34.930 ************************************ 00:09:34.930 04:00:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:09:34.930 * Looking for test storage... 
00:09:34.930 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:34.930 04:00:49 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:34.930 04:00:49 -- nvmf/common.sh@7 -- # uname -s 00:09:34.930 04:00:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.930 04:00:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.930 04:00:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.930 04:00:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.930 04:00:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.930 04:00:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.930 04:00:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.930 04:00:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.930 04:00:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.930 04:00:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.930 04:00:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:09:34.930 04:00:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:09:34.930 04:00:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.930 04:00:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.930 04:00:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:34.930 04:00:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.930 04:00:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:34.930 04:00:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.930 04:00:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.930 04:00:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.930 04:00:49 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.930 04:00:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.930 04:00:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.930 04:00:49 -- paths/export.sh@5 -- # export PATH 00:09:34.930 04:00:49 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.930 04:00:49 -- nvmf/common.sh@47 -- # : 0 00:09:34.930 04:00:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:34.930 04:00:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:34.930 04:00:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.930 04:00:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.930 04:00:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.930 04:00:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:34.930 04:00:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:34.930 04:00:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:34.930 04:00:49 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:34.930 04:00:49 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:34.930 04:00:49 -- target/abort.sh@14 -- # nvmftestinit 00:09:34.930 04:00:49 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:09:34.930 04:00:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.930 04:00:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:34.930 04:00:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:34.930 04:00:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:34.931 04:00:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.931 04:00:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:34.931 04:00:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.931 04:00:49 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:34.931 04:00:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:34.931 04:00:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:34.931 04:00:49 -- common/autotest_common.sh@10 -- # set +x 00:09:40.208 04:00:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:40.208 04:00:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:40.208 04:00:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:40.208 04:00:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:40.208 04:00:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:40.208 04:00:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:40.208 04:00:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:40.208 04:00:54 -- nvmf/common.sh@295 -- # net_devs=() 00:09:40.208 04:00:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:40.208 04:00:54 -- nvmf/common.sh@296 -- # e810=() 00:09:40.208 04:00:54 -- nvmf/common.sh@296 -- # local -ga e810 00:09:40.208 04:00:54 -- nvmf/common.sh@297 -- # x722=() 00:09:40.208 04:00:54 -- nvmf/common.sh@297 -- # local -ga x722 00:09:40.208 04:00:54 -- nvmf/common.sh@298 -- # mlx=() 00:09:40.208 04:00:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:40.208 04:00:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:40.208 04:00:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:40.208 04:00:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:40.208 04:00:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:40.208 04:00:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:40.208 04:00:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:40.208 04:00:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:40.208 04:00:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:40.208 04:00:54 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:40.208 04:00:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:40.208 04:00:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:40.208 04:00:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:40.208 04:00:54 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:40.208 04:00:54 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:40.208 04:00:54 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:40.208 04:00:54 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:40.208 04:00:54 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:40.208 04:00:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:40.208 04:00:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:40.208 04:00:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:40.208 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:40.208 04:00:54 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:40.208 04:00:54 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:40.208 04:00:54 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:40.208 04:00:54 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:40.208 04:00:54 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:40.208 04:00:54 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:40.208 04:00:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:40.208 04:00:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:40.208 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:40.208 04:00:54 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:40.208 04:00:54 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:40.208 04:00:54 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:40.208 04:00:54 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:40.208 04:00:54 -- nvmf/common.sh@352 -- 
# [[ rdma == rdma ]] 00:09:40.208 04:00:54 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:40.208 04:00:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:40.208 04:00:54 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:40.208 04:00:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:40.208 04:00:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.208 04:00:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:40.208 04:00:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.208 04:00:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:40.208 Found net devices under 0000:18:00.0: mlx_0_0 00:09:40.208 04:00:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.208 04:00:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:40.208 04:00:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.208 04:00:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:40.208 04:00:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.208 04:00:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:40.208 Found net devices under 0000:18:00.1: mlx_0_1 00:09:40.208 04:00:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.208 04:00:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:40.208 04:00:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:40.208 04:00:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:40.208 04:00:54 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:09:40.208 04:00:54 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:09:40.208 04:00:54 -- nvmf/common.sh@409 -- # rdma_device_init 00:09:40.208 04:00:54 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:09:40.208 04:00:54 -- nvmf/common.sh@58 -- # uname 00:09:40.208 04:00:54 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:40.208 04:00:54 -- nvmf/common.sh@62 
-- # modprobe ib_cm 00:09:40.208 04:00:54 -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:40.208 04:00:54 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:40.208 04:00:54 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:40.208 04:00:54 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:40.208 04:00:54 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:40.208 04:00:54 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:40.208 04:00:54 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:09:40.208 04:00:54 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:40.208 04:00:54 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:40.208 04:00:54 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:40.208 04:00:54 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:40.208 04:00:54 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:40.209 04:00:54 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:40.209 04:00:54 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:40.209 04:00:54 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:40.209 04:00:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.209 04:00:54 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:40.209 04:00:54 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:40.209 04:00:54 -- nvmf/common.sh@105 -- # continue 2 00:09:40.209 04:00:54 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:40.209 04:00:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.209 04:00:54 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:40.209 04:00:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.209 04:00:54 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:40.209 04:00:54 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:40.209 04:00:54 -- nvmf/common.sh@105 -- # continue 2 00:09:40.209 04:00:54 -- nvmf/common.sh@73 -- # 
for nic_name in $(get_rdma_if_list) 00:09:40.209 04:00:54 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:40.209 04:00:54 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:40.209 04:00:54 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:40.209 04:00:54 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:40.209 04:00:54 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:40.209 04:00:54 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:40.209 04:00:54 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:40.209 04:00:54 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:40.209 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:40.209 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:09:40.209 altname enp24s0f0np0 00:09:40.209 altname ens785f0np0 00:09:40.209 inet 192.168.100.8/24 scope global mlx_0_0 00:09:40.209 valid_lft forever preferred_lft forever 00:09:40.209 04:00:54 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:40.209 04:00:54 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:40.209 04:00:54 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:40.209 04:00:54 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:40.209 04:00:54 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:40.209 04:00:54 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:40.209 04:00:54 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:40.209 04:00:54 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:40.209 04:00:54 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:40.209 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:40.209 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:09:40.209 altname enp24s0f1np1 00:09:40.209 altname ens785f1np1 00:09:40.209 inet 192.168.100.9/24 scope global mlx_0_1 00:09:40.209 valid_lft forever preferred_lft forever 00:09:40.209 04:00:54 -- nvmf/common.sh@411 -- # return 0 00:09:40.209 04:00:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:40.209 04:00:54 -- 
nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:40.209 04:00:54 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:09:40.209 04:00:54 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:09:40.209 04:00:54 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:40.209 04:00:54 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:40.209 04:00:54 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:40.209 04:00:54 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:40.209 04:00:54 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:40.469 04:00:54 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:40.469 04:00:54 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:40.469 04:00:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.469 04:00:54 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:40.469 04:00:54 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:40.469 04:00:54 -- nvmf/common.sh@105 -- # continue 2 00:09:40.469 04:00:54 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:40.469 04:00:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.469 04:00:54 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:40.469 04:00:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.469 04:00:54 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:40.469 04:00:54 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:40.469 04:00:54 -- nvmf/common.sh@105 -- # continue 2 00:09:40.469 04:00:54 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:40.469 04:00:54 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:40.469 04:00:54 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:40.469 04:00:54 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:40.469 04:00:54 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:40.469 04:00:54 -- nvmf/common.sh@113 
-- # cut -d/ -f1 00:09:40.469 04:00:54 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:40.469 04:00:54 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:40.469 04:00:54 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:40.469 04:00:54 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:40.469 04:00:54 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:40.469 04:00:54 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:40.469 04:00:54 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:09:40.469 192.168.100.9' 00:09:40.469 04:00:54 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:40.469 192.168.100.9' 00:09:40.469 04:00:54 -- nvmf/common.sh@446 -- # head -n 1 00:09:40.469 04:00:54 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:40.469 04:00:54 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:09:40.469 192.168.100.9' 00:09:40.469 04:00:54 -- nvmf/common.sh@447 -- # tail -n +2 00:09:40.469 04:00:54 -- nvmf/common.sh@447 -- # head -n 1 00:09:40.469 04:00:54 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:40.469 04:00:54 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:09:40.470 04:00:54 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:40.470 04:00:54 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:09:40.470 04:00:54 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:09:40.470 04:00:54 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:09:40.470 04:00:54 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:40.470 04:00:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:40.470 04:00:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:40.470 04:00:54 -- common/autotest_common.sh@10 -- # set +x 00:09:40.470 04:00:54 -- nvmf/common.sh@470 -- # nvmfpid=199908 00:09:40.470 04:00:54 -- nvmf/common.sh@471 -- # waitforlisten 199908 00:09:40.470 04:00:54 -- nvmf/common.sh@469 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:40.470 04:00:54 -- common/autotest_common.sh@817 -- # '[' -z 199908 ']' 00:09:40.470 04:00:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.470 04:00:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:40.470 04:00:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.470 04:00:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:40.470 04:00:54 -- common/autotest_common.sh@10 -- # set +x 00:09:40.470 [2024-04-19 04:00:54.852077] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:09:40.470 [2024-04-19 04:00:54.852123] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.470 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.470 [2024-04-19 04:00:54.902598] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:40.470 [2024-04-19 04:00:54.970070] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:40.470 [2024-04-19 04:00:54.970102] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:40.470 [2024-04-19 04:00:54.970108] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:40.470 [2024-04-19 04:00:54.970116] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:40.470 [2024-04-19 04:00:54.970121] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
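The address plumbing traced above (nvmf/common.sh's `get_ip_address` piping `ip -o -4 addr show` through awk/cut, then splitting `RDMA_IP_LIST` into first/second target IPs with head/tail) can be reproduced standalone. This is a sketch, not the test code itself: the `ip` output line below is hard-coded so it runs on a machine without an mlx_0_0 interface.

```shell
# Sample `ip -o -4 addr show mlx_0_0` output, hard-coded for the sketch.
sample='2: mlx_0_0    inet 192.168.100.8/24 brd 192.168.100.255 scope global mlx_0_0\       valid_lft forever preferred_lft forever'
# Field 4 is the CIDR address; cut strips the /24 prefix length.
ip0=$(printf '%s\n' "$sample" | awk '{print $4}' | cut -d/ -f1)

# RDMA_IP_LIST is newline-separated; head/tail pick out the two targets.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$ip0 / $NVMF_FIRST_TARGET_IP / $NVMF_SECOND_TARGET_IP"
```

The same head/tail split recurs verbatim later in the log when ns_hotplug_stress re-sources nvmf/common.sh.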
00:09:40.470 [2024-04-19 04:00:54.970226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.470 [2024-04-19 04:00:54.970311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:40.470 [2024-04-19 04:00:54.970312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.408 04:00:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:41.408 04:00:55 -- common/autotest_common.sh@850 -- # return 0 00:09:41.408 04:00:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:41.409 04:00:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:41.409 04:00:55 -- common/autotest_common.sh@10 -- # set +x 00:09:41.409 04:00:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.409 04:00:55 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:09:41.409 04:00:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:41.409 04:00:55 -- common/autotest_common.sh@10 -- # set +x 00:09:41.409 [2024-04-19 04:00:55.679666] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfd2ee0/0xfd73d0) succeed. 00:09:41.409 [2024-04-19 04:00:55.688664] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfd4430/0x1018a60) succeed. 
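The reactors above land on cores 1, 2 and 3 because nvmf_tgt was started with `-m 0xE`: 0xE is binary 1110, i.e. bits 1-3 set. SPDK/DPDK decode the mask internally in C; this shell sketch just shows the arithmetic.

```shell
# Decode the -m 0xE core mask: each set bit selects one CPU core.
mask=$(( 0xE ))
cores=
bit=0
while [ "$bit" -lt 8 ]; do
  if [ $(( (mask >> bit) & 1 )) -ne 0 ]; then
    cores="$cores$bit "
  fi
  bit=$(( bit + 1 ))
done
echo "reactor cores: $cores"
```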
00:09:41.409 04:00:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:41.409 04:00:55 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:41.409 04:00:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:41.409 04:00:55 -- common/autotest_common.sh@10 -- # set +x 00:09:41.409 Malloc0 00:09:41.409 04:00:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:41.409 04:00:55 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:41.409 04:00:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:41.409 04:00:55 -- common/autotest_common.sh@10 -- # set +x 00:09:41.409 Delay0 00:09:41.409 04:00:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:41.409 04:00:55 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:41.409 04:00:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:41.409 04:00:55 -- common/autotest_common.sh@10 -- # set +x 00:09:41.409 04:00:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:41.409 04:00:55 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:41.409 04:00:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:41.409 04:00:55 -- common/autotest_common.sh@10 -- # set +x 00:09:41.409 04:00:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:41.409 04:00:55 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:41.409 04:00:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:41.409 04:00:55 -- common/autotest_common.sh@10 -- # set +x 00:09:41.409 [2024-04-19 04:00:55.831472] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:41.409 04:00:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:41.409 04:00:55 -- target/abort.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:41.409 04:00:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:41.409 04:00:55 -- common/autotest_common.sh@10 -- # set +x 00:09:41.409 04:00:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:41.409 04:00:55 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:41.409 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.409 [2024-04-19 04:00:55.918787] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:43.961 Initializing NVMe Controllers 00:09:43.961 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:09:43.961 controller IO queue size 128 less than required 00:09:43.961 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:43.961 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:43.961 Initialization complete. Launching workers. 
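The abort counters reported below are internally consistent, assuming the usual reading of this summary (each submitted abort ends up success, unsuccess, or failed; each I/O, completed or aborted, maps to an abort attempt, submitted or not). A quick check with the numbers copied from this run:

```shell
# Counters copied verbatim from the abort summary in this log.
submitted=57649; not_submitted=62
success=57589; unsuccess=60; abort_failed=0
io_completed=123; io_failed=57588
# Submitted aborts partition into success/unsuccess/failed.
echo $(( success + unsuccess + abort_failed ))
# Total I/O equals total abort attempts (submitted + failed to submit).
echo $(( io_completed + io_failed ))
echo $(( submitted + not_submitted ))
```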
00:09:43.961 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 57588 00:09:43.961 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 57649, failed to submit 62 00:09:43.961 success 57589, unsuccess 60, failed 0 00:09:43.961 04:00:58 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:43.961 04:00:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:43.961 04:00:58 -- common/autotest_common.sh@10 -- # set +x 00:09:43.961 04:00:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:43.961 04:00:58 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:43.961 04:00:58 -- target/abort.sh@38 -- # nvmftestfini 00:09:43.961 04:00:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:43.961 04:00:58 -- nvmf/common.sh@117 -- # sync 00:09:43.961 04:00:58 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:43.961 04:00:58 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:43.961 04:00:58 -- nvmf/common.sh@120 -- # set +e 00:09:43.961 04:00:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:43.962 04:00:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:43.962 rmmod nvme_rdma 00:09:43.962 rmmod nvme_fabrics 00:09:43.962 04:00:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:43.962 04:00:58 -- nvmf/common.sh@124 -- # set -e 00:09:43.962 04:00:58 -- nvmf/common.sh@125 -- # return 0 00:09:43.962 04:00:58 -- nvmf/common.sh@478 -- # '[' -n 199908 ']' 00:09:43.962 04:00:58 -- nvmf/common.sh@479 -- # killprocess 199908 00:09:43.962 04:00:58 -- common/autotest_common.sh@936 -- # '[' -z 199908 ']' 00:09:43.962 04:00:58 -- common/autotest_common.sh@940 -- # kill -0 199908 00:09:43.962 04:00:58 -- common/autotest_common.sh@941 -- # uname 00:09:43.962 04:00:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:43.962 04:00:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 199908 00:09:43.962 
04:00:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:43.962 04:00:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:43.962 04:00:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 199908' 00:09:43.962 killing process with pid 199908 00:09:43.962 04:00:58 -- common/autotest_common.sh@955 -- # kill 199908 00:09:43.962 04:00:58 -- common/autotest_common.sh@960 -- # wait 199908 00:09:43.962 04:00:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:43.962 04:00:58 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:09:43.962 00:09:43.962 real 0m9.156s 00:09:43.962 user 0m13.996s 00:09:43.962 sys 0m4.422s 00:09:43.962 04:00:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:43.962 04:00:58 -- common/autotest_common.sh@10 -- # set +x 00:09:43.962 ************************************ 00:09:43.962 END TEST nvmf_abort 00:09:43.962 ************************************ 00:09:43.962 04:00:58 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:09:43.962 04:00:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:43.962 04:00:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:43.962 04:00:58 -- common/autotest_common.sh@10 -- # set +x 00:09:44.226 ************************************ 00:09:44.226 START TEST nvmf_ns_hotplug_stress 00:09:44.226 ************************************ 00:09:44.226 04:00:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:09:44.226 * Looking for test storage... 
00:09:44.226 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:44.226 04:00:58 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:44.226 04:00:58 -- nvmf/common.sh@7 -- # uname -s 00:09:44.226 04:00:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.226 04:00:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.226 04:00:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.226 04:00:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.226 04:00:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.226 04:00:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.226 04:00:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.226 04:00:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.226 04:00:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.226 04:00:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.226 04:00:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:09:44.226 04:00:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:09:44.226 04:00:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.226 04:00:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.226 04:00:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:44.226 04:00:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.226 04:00:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:44.226 04:00:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.226 04:00:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.226 04:00:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.226 04:00:58 -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.226 04:00:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.226 04:00:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.226 04:00:58 -- paths/export.sh@5 -- # export PATH 00:09:44.226 04:00:58 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.226 04:00:58 -- nvmf/common.sh@47 -- # : 0 00:09:44.226 04:00:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:44.226 04:00:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:44.226 04:00:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.226 04:00:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.226 04:00:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.226 04:00:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:44.226 04:00:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:44.226 04:00:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:44.226 04:00:58 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:44.226 04:00:58 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:09:44.226 04:00:58 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:09:44.226 04:00:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.226 04:00:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:44.226 04:00:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:44.226 04:00:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:44.226 04:00:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.226 04:00:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:44.226 04:00:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.226 04:00:58 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:44.226 04:00:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:44.226 04:00:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:44.226 04:00:58 -- common/autotest_common.sh@10 -- # set +x 00:09:49.507 04:01:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:49.507 04:01:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:49.507 04:01:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:49.507 04:01:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:49.507 04:01:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:49.507 04:01:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:49.507 04:01:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:49.507 04:01:03 -- nvmf/common.sh@295 -- # net_devs=() 00:09:49.507 04:01:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:49.507 04:01:03 -- nvmf/common.sh@296 -- # e810=() 00:09:49.507 04:01:03 -- nvmf/common.sh@296 -- # local -ga e810 00:09:49.507 04:01:03 -- nvmf/common.sh@297 -- # x722=() 00:09:49.507 04:01:03 -- nvmf/common.sh@297 -- # local -ga x722 00:09:49.507 04:01:03 -- nvmf/common.sh@298 -- # mlx=() 00:09:49.507 04:01:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:49.507 04:01:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.507 04:01:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:49.507 04:01:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.507 04:01:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.507 04:01:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.507 04:01:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.507 04:01:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.507 04:01:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.507 04:01:03 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.507 04:01:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.507 04:01:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.507 04:01:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:49.507 04:01:03 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:49.507 04:01:03 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:49.507 04:01:03 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:49.507 04:01:03 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:49.507 04:01:03 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:49.507 04:01:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:49.507 04:01:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:49.507 04:01:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:49.507 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:49.507 04:01:03 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:49.507 04:01:03 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:49.507 04:01:03 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:49.507 04:01:03 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:49.507 04:01:03 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:49.507 04:01:03 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:49.507 04:01:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:49.507 04:01:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:49.507 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:49.507 04:01:03 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:49.507 04:01:03 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:49.507 04:01:03 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:49.507 04:01:03 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:49.507 04:01:03 -- nvmf/common.sh@352 -- 
# [[ rdma == rdma ]] 00:09:49.507 04:01:03 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:49.507 04:01:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:49.507 04:01:03 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:49.507 04:01:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:49.507 04:01:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.507 04:01:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:49.507 04:01:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.507 04:01:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:49.507 Found net devices under 0000:18:00.0: mlx_0_0 00:09:49.507 04:01:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.507 04:01:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:49.507 04:01:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.507 04:01:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:49.507 04:01:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.507 04:01:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:49.507 Found net devices under 0000:18:00.1: mlx_0_1 00:09:49.507 04:01:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.507 04:01:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:49.507 04:01:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:49.507 04:01:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:49.507 04:01:03 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:09:49.507 04:01:03 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:09:49.507 04:01:03 -- nvmf/common.sh@409 -- # rdma_device_init 00:09:49.507 04:01:03 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:09:49.507 04:01:03 -- nvmf/common.sh@58 -- # uname 00:09:49.507 04:01:03 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:49.507 04:01:03 -- nvmf/common.sh@62 
-- # modprobe ib_cm 00:09:49.507 04:01:03 -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:49.507 04:01:03 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:49.507 04:01:03 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:49.507 04:01:03 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:49.507 04:01:03 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:49.507 04:01:03 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:49.507 04:01:03 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:09:49.507 04:01:03 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:49.507 04:01:03 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:49.507 04:01:03 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:49.507 04:01:03 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:49.507 04:01:03 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:49.507 04:01:03 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:49.507 04:01:03 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:49.507 04:01:03 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:49.507 04:01:03 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:49.507 04:01:03 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:49.507 04:01:03 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:49.507 04:01:03 -- nvmf/common.sh@105 -- # continue 2 00:09:49.508 04:01:03 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:49.508 04:01:03 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:49.508 04:01:03 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:49.508 04:01:03 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:49.508 04:01:03 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:49.508 04:01:03 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:49.508 04:01:03 -- nvmf/common.sh@105 -- # continue 2 00:09:49.508 04:01:03 -- nvmf/common.sh@73 -- # 
for nic_name in $(get_rdma_if_list) 00:09:49.508 04:01:03 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:49.508 04:01:03 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:49.508 04:01:03 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:49.508 04:01:03 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:49.508 04:01:03 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:49.508 04:01:03 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:49.508 04:01:03 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:49.508 04:01:03 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:49.508 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:49.508 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:09:49.508 altname enp24s0f0np0 00:09:49.508 altname ens785f0np0 00:09:49.508 inet 192.168.100.8/24 scope global mlx_0_0 00:09:49.508 valid_lft forever preferred_lft forever 00:09:49.508 04:01:03 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:49.508 04:01:03 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:49.508 04:01:03 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:49.508 04:01:03 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:49.508 04:01:03 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:49.508 04:01:03 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:49.508 04:01:03 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:49.508 04:01:03 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:49.508 04:01:03 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:49.508 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:49.508 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:09:49.508 altname enp24s0f1np1 00:09:49.508 altname ens785f1np1 00:09:49.508 inet 192.168.100.9/24 scope global mlx_0_1 00:09:49.508 valid_lft forever preferred_lft forever 00:09:49.508 04:01:03 -- nvmf/common.sh@411 -- # return 0 00:09:49.508 04:01:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:49.508 04:01:03 -- 
nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:49.508 04:01:03 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:09:49.508 04:01:03 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:09:49.508 04:01:03 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:49.508 04:01:03 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:49.508 04:01:03 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:49.508 04:01:03 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:49.508 04:01:03 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:49.508 04:01:03 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:49.508 04:01:03 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:49.508 04:01:03 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:49.508 04:01:03 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:49.508 04:01:03 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:49.508 04:01:03 -- nvmf/common.sh@105 -- # continue 2 00:09:49.508 04:01:03 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:49.508 04:01:03 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:49.508 04:01:03 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:49.508 04:01:03 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:49.508 04:01:03 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:49.508 04:01:03 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:49.508 04:01:03 -- nvmf/common.sh@105 -- # continue 2 00:09:49.508 04:01:03 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:49.508 04:01:03 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:49.508 04:01:03 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:49.508 04:01:03 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:49.508 04:01:03 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:49.508 04:01:03 -- nvmf/common.sh@113 
-- # cut -d/ -f1 00:09:49.508 04:01:03 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:49.508 04:01:03 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:49.508 04:01:03 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:49.508 04:01:03 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:49.508 04:01:03 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:49.508 04:01:03 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:49.508 04:01:03 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:09:49.508 192.168.100.9' 00:09:49.508 04:01:03 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:49.508 192.168.100.9' 00:09:49.508 04:01:03 -- nvmf/common.sh@446 -- # head -n 1 00:09:49.508 04:01:03 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:49.508 04:01:03 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:09:49.508 192.168.100.9' 00:09:49.508 04:01:03 -- nvmf/common.sh@447 -- # tail -n +2 00:09:49.508 04:01:03 -- nvmf/common.sh@447 -- # head -n 1 00:09:49.508 04:01:03 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:49.508 04:01:03 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:09:49.508 04:01:03 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:49.508 04:01:03 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:09:49.508 04:01:03 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:09:49.508 04:01:03 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:09:49.508 04:01:03 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:09:49.508 04:01:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:49.508 04:01:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:49.508 04:01:03 -- common/autotest_common.sh@10 -- # set +x 00:09:49.508 04:01:03 -- nvmf/common.sh@470 -- # nvmfpid=203818 00:09:49.508 04:01:03 -- nvmf/common.sh@471 -- # waitforlisten 203818 00:09:49.508 04:01:03 -- nvmf/common.sh@469 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:49.508 04:01:03 -- common/autotest_common.sh@817 -- # '[' -z 203818 ']' 00:09:49.508 04:01:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.508 04:01:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:49.508 04:01:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.508 04:01:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:49.508 04:01:03 -- common/autotest_common.sh@10 -- # set +x 00:09:49.508 [2024-04-19 04:01:03.969548] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:09:49.508 [2024-04-19 04:01:03.969593] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.508 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.508 [2024-04-19 04:01:04.021833] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:49.767 [2024-04-19 04:01:04.093747] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.767 [2024-04-19 04:01:04.093783] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.767 [2024-04-19 04:01:04.093789] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.767 [2024-04-19 04:01:04.093795] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.767 [2024-04-19 04:01:04.093799] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:49.768 [2024-04-19 04:01:04.093894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:49.768 [2024-04-19 04:01:04.093989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:49.768 [2024-04-19 04:01:04.093990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.338 04:01:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:50.338 04:01:04 -- common/autotest_common.sh@850 -- # return 0 00:09:50.338 04:01:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:50.338 04:01:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:50.338 04:01:04 -- common/autotest_common.sh@10 -- # set +x 00:09:50.338 04:01:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.338 04:01:04 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:09:50.338 04:01:04 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:50.598 [2024-04-19 04:01:04.931806] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1760ee0/0x17653d0) succeed. 00:09:50.598 [2024-04-19 04:01:04.940822] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1762430/0x17a6a60) succeed. 
00:09:50.598 04:01:05 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:50.859 04:01:05 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:50.859 [2024-04-19 04:01:05.349805] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:50.859 04:01:05 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:51.120 04:01:05 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:51.379 Malloc0 00:09:51.379 04:01:05 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:51.379 Delay0 00:09:51.379 04:01:05 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.639 04:01:06 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:51.899 NULL1 00:09:51.899 04:01:06 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:51.899 04:01:06 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:51.899 04:01:06 -- target/ns_hotplug_stress.sh@33 -- # 
PERF_PID=204259 00:09:51.899 04:01:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:09:51.899 04:01:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.899 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.279 Read completed with error (sct=0, sc=11) 00:09:53.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.279 04:01:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.279 04:01:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:09:53.279 04:01:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:53.539 true 00:09:53.539 04:01:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:09:53.539 04:01:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.476 04:01:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:09:54.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.476 04:01:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:09:54.476 04:01:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:54.476 true 00:09:54.737 04:01:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:09:54.737 04:01:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.677 04:01:09 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.677 04:01:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:09:55.677 04:01:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:55.677 true 00:09:55.937 04:01:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 
204259 00:09:55.937 04:01:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.767 04:01:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.767 04:01:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:09:56.767 04:01:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:57.028 true 00:09:57.029 04:01:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:09:57.029 04:01:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.969 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:57.969 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:57.969 04:01:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.969 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:57.969 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:57.969 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:09:57.969 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:57.969 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:57.969 04:01:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:09:57.969 04:01:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:58.229 true 00:09:58.229 04:01:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:09:58.229 04:01:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.179 04:01:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.179 04:01:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:09:59.179 04:01:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:59.179 true 00:09:59.439 04:01:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:09:59.439 04:01:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.378 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.378 04:01:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.378 04:01:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:10:00.378 04:01:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:00.378 true 00:10:00.378 04:01:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:00.378 04:01:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.638 04:01:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.897 04:01:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:10:00.897 04:01:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:00.897 true 00:10:00.897 04:01:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:00.897 04:01:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.156 04:01:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.422 04:01:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:10:01.422 04:01:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:01.422 true 00:10:01.422 04:01:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:01.422 04:01:15 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.682 04:01:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.942 04:01:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:10:01.942 04:01:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:01.942 true 00:10:01.942 04:01:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:01.942 04:01:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.202 04:01:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.202 04:01:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:10:02.202 04:01:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:02.461 true 00:10:02.461 04:01:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:02.461 04:01:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.720 04:01:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.720 04:01:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:10:02.720 04:01:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:02.979 true 00:10:02.979 04:01:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:02.979 04:01:17 -- 
target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.238 04:01:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.238 04:01:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:10:03.238 04:01:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:03.498 true 00:10:03.498 04:01:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:03.498 04:01:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.498 04:01:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.758 04:01:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:10:03.758 04:01:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:04.017 true 00:10:04.018 04:01:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:04.018 04:01:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.018 04:01:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.278 04:01:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:10:04.278 04:01:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:04.538 true 00:10:04.538 04:01:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 
00:10:04.538 04:01:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.538 04:01:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.798 04:01:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:10:04.798 04:01:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:04.798 true 00:10:04.798 04:01:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:04.798 04:01:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.058 04:01:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.320 04:01:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:10:05.320 04:01:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:05.320 true 00:10:05.320 04:01:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:05.320 04:01:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.581 04:01:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.841 04:01:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:10:05.841 04:01:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:05.841 true 00:10:05.841 04:01:20 -- 
target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:05.841 04:01:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.101 04:01:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.101 04:01:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:10:06.101 04:01:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:06.361 true 00:10:06.361 04:01:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:06.361 04:01:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.620 04:01:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.621 04:01:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:10:06.621 04:01:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:06.880 true 00:10:06.880 04:01:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:06.880 04:01:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.140 04:01:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.140 04:01:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:10:07.140 04:01:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:07.400 true 
00:10:07.400 04:01:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:07.400 04:01:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.660 04:01:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.660 04:01:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:10:07.660 04:01:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:07.921 true 00:10:07.921 04:01:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:07.921 04:01:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.921 04:01:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.180 04:01:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:10:08.180 04:01:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:08.440 true 00:10:08.440 04:01:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:08.440 04:01:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:09.414 04:01:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:09.414 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:10:09.415 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:09.415 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:09.415 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:09.415 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:09.415 [2024-04-19 04:01:23.746580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.746640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.746684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.746713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.746747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.746783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.746819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.746852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.746882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.746912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.746944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.746978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.747009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.747036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.747059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.747084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.747111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.747138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.747165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.747207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.747241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.747268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.747300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.747334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.747365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.747407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 
04:01:23.747447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.747479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.747505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.747533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.747558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.747584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.747619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.747648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.747794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.747835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.747867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.747895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.747926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.747965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.748002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.415 [2024-04-19 04:01:23.748032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:09.418 [2024-04-19 04:01:23.758451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 
[2024-04-19 04:01:23.758479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.758507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.758539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.758568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.758598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.758629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.758658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.758690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.758733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.758756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.758794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.758823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.758951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.758990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.759023] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.759055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.759085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.759115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.759145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.759177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.759204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.759237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.759266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.759291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.759320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.759351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.759380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.759409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.759436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.418 [2024-04-19 04:01:23.759465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.759493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.759521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.759550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.759580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.759614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.418 [2024-04-19 04:01:23.759642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.759675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.759702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.759728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.759756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.759784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.759941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.759982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760020] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 
04:01:23.760912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.760968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.761003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.761036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.761068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.761222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.761255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.761286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.761316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.761341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.761369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.761398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.761428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.761460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.761490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.761522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.761554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.761581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.761611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.761644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.419 [2024-04-19 04:01:23.761674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.761703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.761733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.761758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.761785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.761807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.761844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.761871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.761897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 
[2024-04-19 04:01:23.761926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.761954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.761988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762480] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.762971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763468] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.763996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.764025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.764062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.764100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.764128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.764160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.764196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.764244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.764278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.764322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.764352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.764491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 
04:01:23.764518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.764561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.764589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.764619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.764650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.764684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.764712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.420 [2024-04-19 04:01:23.764740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.421 [2024-04-19 04:01:23.764770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.421 [2024-04-19 04:01:23.764795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.421 [2024-04-19 04:01:23.764824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.421 [2024-04-19 04:01:23.764855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.421 [2024-04-19 04:01:23.764885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.421 [2024-04-19 04:01:23.764912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.421 [2024-04-19 04:01:23.764939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.421 [2024-04-19 04:01:23.764966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:09.423 04:01:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024
00:10:09.424 04:01:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
04:01:23.774516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.774548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.774577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.774608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.774643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.774679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.774706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.774733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.774762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.774791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.774816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.774844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.774879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.774915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.774949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.774981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.775010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.775035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.775059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.775089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.775116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.775145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.775172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.775201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.775232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.775266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.775297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.775329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.775361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 
[2024-04-19 04:01:23.775407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.775442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.775475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.775509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.775664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.775700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.775735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.775764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.775793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.775820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.775850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.775884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.775922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.775955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.775991] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.776026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.776053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.776079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.776110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.776145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.776179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.424 [2024-04-19 04:01:23.776215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.776246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.776280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.776315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.776358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.776397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.776437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.776469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.425 [2024-04-19 04:01:23.776499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.776532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.776569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.776629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.776658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.776691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.776725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.776869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.776901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.776933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.776966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777091] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.777969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 
04:01:23.778127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.778903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.779052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.779080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.779109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 
[2024-04-19 04:01:23.779152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.779185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.779214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.779244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.779282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.779315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.779346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.779377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.779411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.779444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.779475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.779508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.779541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.779575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.779609] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.779642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.425 [2024-04-19 04:01:23.779672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.779702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.779737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.779772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.779804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.779836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.779864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.779892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.779921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.779964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780677] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.780987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.781017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.781048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.781082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.781108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.781154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.781186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.781232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.781260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.781416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.781454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.781490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.781525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.781556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.781590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.781621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.781651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.781680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.781709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 
04:01:23.781739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.781774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.781801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.781826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.781854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.781883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.781917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.781954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.781986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.782023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.782059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.782090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.782118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.782153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.782182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.426 [2024-04-19 04:01:23.782212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:09.430 [2024-04-19 04:01:23.793200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:10:09.430 [2024-04-19 04:01:23.793231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.793256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.793286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.793320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.793355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.793388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.793420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.793451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.793483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.793512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.793545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.793579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.793613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.793649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.793683] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.793720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.793757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.793789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.793825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.793854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.793884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.793916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.793944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.793976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794753] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.794985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.795013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.795048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.795089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.795119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.795293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.795325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.795354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.795390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.795424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.795455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.795484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.795521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.795553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.795584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.795617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.795649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.795683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.795718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.430 [2024-04-19 04:01:23.795758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.795788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 
04:01:23.795820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.795851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.795881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.795914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.795945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.795976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.796006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.796036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.796066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.796094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.796123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.796154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.796185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.796216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.796249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.796281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.796431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.796464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.796493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.796533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.796569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.796604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.796637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.796670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.796707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.796745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.796784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.796817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.796851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 
[2024-04-19 04:01:23.796877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.796906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.796944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.796972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.797004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.797036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.797068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.797094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.797123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.797154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.797183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.797215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.797251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.797287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.797320] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.797349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.797393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.797424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.797460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.797490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.797638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.797673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.797707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.797746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.797777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.797815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.797848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.797876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.797913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.431 [2024-04-19 04:01:23.797941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.797968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798382] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.798989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.799019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.799046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.799071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.799098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.799129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.799162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.431 [2024-04-19 04:01:23.799199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.432 [2024-04-19 04:01:23.799233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.432 [2024-04-19 04:01:23.799265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.432 [2024-04-19 04:01:23.799298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.432 [2024-04-19 04:01:23.799328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.432 [2024-04-19 04:01:23.799357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.432 [2024-04-19 04:01:23.799387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.432 [2024-04-19 
04:01:23.799419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.432 [2024-04-19 04:01:23.799449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.432 [2024-04-19 04:01:23.799474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.432 [2024-04-19 04:01:23.799504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.432 [2024-04-19 04:01:23.799533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.432 [2024-04-19 04:01:23.799563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.432 [2024-04-19 04:01:23.799595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.432 [2024-04-19 04:01:23.799630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.432 [2024-04-19 04:01:23.799664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.432 [2024-04-19 04:01:23.799696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.432 [2024-04-19 04:01:23.799881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.432 [2024-04-19 04:01:23.799911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.432 [2024-04-19 04:01:23.799938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.432 [2024-04-19 04:01:23.799970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.432 [2024-04-19 04:01:23.800000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.432 [2024-04-19 04:01:23.800034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.810805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.810838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.810869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.810901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.810933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.810967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.811000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.811028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.811056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.811088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.811126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.811153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.811194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.811226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.811365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 
[2024-04-19 04:01:23.811395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.811438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.811468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.811498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.811533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.811564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.811593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.811628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.811661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.811689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.811721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.811750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.811781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.811811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.811848] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.811884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.811917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.811948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.811978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.812006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.812037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.812065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.812093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.812125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.812157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.812188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.812224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.812259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.812291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.435 [2024-04-19 04:01:23.812348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.812380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.812409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.812438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.812561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.812601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.435 [2024-04-19 04:01:23.812631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.812665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.812694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.812725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.812755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.812787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.812818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.812852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.812890] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.812927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.812954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.812983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.813017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.813046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.813072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.813099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.813130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.813158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.813185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.813214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.813246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.813281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.813314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.813345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.813377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.813426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.813465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.813509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.813537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.813684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.813723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.813753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.813787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.813820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.813854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.813892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.813930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 
04:01:23.813961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.813991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.814020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.814046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.814075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.814109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.814139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.814172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.814205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.814237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.814273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.814301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.814336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.814368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.814398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.814426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.814454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.814480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.814509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.814536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.814563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.814590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.814639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.814674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.814814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.814845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.814879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.814913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.814941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 
[2024-04-19 04:01:23.814975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815392] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.815973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.436 [2024-04-19 04:01:23.816006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.816040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.816075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.436 [2024-04-19 04:01:23.816117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.816149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.816180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.816206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.816234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.816269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.816303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.816338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.816371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.816396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.816431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.816460] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.816489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.816515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.816549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.816580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.816611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.816643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.816673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.816704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.816732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.816763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.816794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.816823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.816868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.816905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.816936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.816961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.817108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.817139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.817164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.817198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.817232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.817269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.817300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.817332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.817361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.817393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.817425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 04:01:23.817451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [2024-04-19 
04:01:23.817484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.437 [identical *ERROR* message repeated, timestamps 2024-04-19 04:01:23.817515 through 04:01:23.828294] 00:10:09.440 [2024-04-19
04:01:23.828321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.828365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.828404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:09.440 [2024-04-19 04:01:23.828544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.828571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.828599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.828638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.828667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.828696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.828723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.828754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.828787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.828819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.828856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:10:09.440 [2024-04-19 04:01:23.828896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.828930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.828965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.828994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.829029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.829060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.829093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.829123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.829160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.829192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.829222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.829251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.829278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.829305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.829334] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.829361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.440 [2024-04-19 04:01:23.829390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.829429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.829473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.829504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.829544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.829570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.829711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.829750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.829781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.829815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.829845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.829874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.829903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.441 [2024-04-19 04:01:23.829936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.829968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.829997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.830031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.830066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.830098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.830124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.830154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.830182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.830213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.830247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.830279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.830317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.830345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.830373] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.830406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.830438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.830475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.830509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.830539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.830565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.830592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.830626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.830674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.830702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.830845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.830879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.830909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.830943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.830974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.831005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.831037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.831069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.831096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.831125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.831158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.831186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.831213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.831240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.831271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.831303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.831335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.831363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 
04:01:23.831401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.831433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.831468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.831503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.831538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.831566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.831597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.831624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.831663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.831692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.831719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.831753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.831805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.831837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.831990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.832027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.832062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.832096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.832131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.832160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.832188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.832216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.832244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.832280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.832311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.832338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.832372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.832410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.832442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 
[2024-04-19 04:01:23.832477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.832511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.832548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.832579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.832609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.832631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.832671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.832700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.832737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.832768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.832801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.832832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.832863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.441 [2024-04-19 04:01:23.832898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.832929] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.832984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.833020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.833051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.833213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.833243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.833270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.833298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.833335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.833366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.833406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.833438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.833470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.833502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.833535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.442 [2024-04-19 04:01:23.833572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.833601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.833629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.833659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.833691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.833720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.833752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.833781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.833813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.833847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.833882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.833917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.833948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.833980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.834014] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.834048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.834081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.834113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.834146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.834306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.834340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.834369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.834398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.834434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.834466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.834498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.834533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.834567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.834598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.834624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.834653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.834682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.834716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.834747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.834775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.834806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.834837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.834871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.834902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.834935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.834972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.835004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 04:01:23.835042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 
04:01:23.835074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.442 [2024-04-19 
04:01:23.846041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.445 [2024-04-19 04:01:23.846080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.445 [2024-04-19 04:01:23.846114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.445 [2024-04-19 04:01:23.846143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.445 [2024-04-19 04:01:23.846176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.445 [2024-04-19 04:01:23.846207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.445 [2024-04-19 04:01:23.846239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.445 [2024-04-19 04:01:23.846273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.445 [2024-04-19 04:01:23.846301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.445 [2024-04-19 04:01:23.846330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.445 [2024-04-19 04:01:23.846361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.445 [2024-04-19 04:01:23.846390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.445 [2024-04-19 04:01:23.846421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.445 [2024-04-19 04:01:23.846451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.445 [2024-04-19 04:01:23.846479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.445 [2024-04-19 04:01:23.846508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.445 [2024-04-19 04:01:23.846540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.846569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.846603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.846643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.846679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.846720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.846751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.846777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.846936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.846985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 
[2024-04-19 04:01:23.847132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847572] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.847976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848579] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.848993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.849028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.849064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.849117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.849151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.849300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.849333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.849361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.849395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.849428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.849457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.849483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.849516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.849545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.849578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.849607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 
04:01:23.849638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.849668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.849700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.849730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.849761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.849790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.849823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.849853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.849888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.849922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.849951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.849980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.850016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.850049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.850083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.850117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.850154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.446 [2024-04-19 04:01:23.850190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.850229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.850260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.850418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.850464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.850497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.850528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.850556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.850582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.850614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.850650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.850684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 
[2024-04-19 04:01:23.850722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.850754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.850787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.850813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.850841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.850874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.850904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.850940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.850977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.851012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.851053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.851085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.851116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.851146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.851175] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.851209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.851242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.851273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.851307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.851339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.851376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.851399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.851560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.851593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.851626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.851670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.851706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.851744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.851777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.447 [2024-04-19 04:01:23.851808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.851835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.851861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.851890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.851919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.851955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.851990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.852019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.852049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.852082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.852114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.852146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.852182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.852217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.852249] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.852283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.852317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.852350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.852382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.852413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.852448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.852485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.852517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.852547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.852585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.852615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.852753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.852792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.852833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.447 [2024-04-19 04:01:23.852860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.863897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.863932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.863965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.863994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.864025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.864062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.864097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.864125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.864154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.864187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.864218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.864252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.864287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.864330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.864378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 
04:01:23.864410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.864564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.864605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.864642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.864677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.864712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.864750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.864784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.864824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.864852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.864887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.864919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.864953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.864987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.865013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.865042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.865069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.865098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.865131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.865162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.865196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.865226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.865262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.865298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.865330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.865370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.865409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.865442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.865474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 
[2024-04-19 04:01:23.865504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.865539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 [2024-04-19 04:01:23.865582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.450 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:09.450 [2024-04-19 04:01:23.865610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.865771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.865801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.865830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.865859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.865892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.865930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.865965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.865998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.866031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.866059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:10:09.451 [2024-04-19 04:01:23.866089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.866123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.866155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.866185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.866212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.866244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.866277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.866310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.866343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.866377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.866411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.866443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.866472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.866504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.866536] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.866573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.866605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.866638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.866669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.866701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.866744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.866772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.866903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.866936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.866972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867600] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.867928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 
04:01:23.868676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.868995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.869038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.869068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.869223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.869257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.869287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.869320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.869355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.869386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.869423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.869453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.869483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.869517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.451 [2024-04-19 04:01:23.869551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.869584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.869613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.869644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.869684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.869720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 
[2024-04-19 04:01:23.869748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.869779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.869808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.869839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.869877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.869913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.869946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.869979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.870013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.870043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.870072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.870101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.870131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.870162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.870192] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.870221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.870366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.870404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.870444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.870479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.870511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.870553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.870581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.870610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.870641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.870670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.870700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.870731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.452 [2024-04-19 04:01:23.870769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.452 [2024-04-19 04:01:23.870802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* line repeated continuously from 04:01:23.870829 through 04:01:23.881828; duplicates elided]
00:10:09.454 [2024-04-19 04:01:23.881856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:10:09.454 [2024-04-19 04:01:23.881883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.454 [2024-04-19 04:01:23.881913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.454 [2024-04-19 04:01:23.881940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.454 [2024-04-19 04:01:23.881997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.882033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.882167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.882195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.882230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.882262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.882294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.882323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.882354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.882386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.882423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.882454] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.882487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.882518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.882554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.882588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.882622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.882656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.882692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.882727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.882758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.882786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.882824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.882855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.882892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.882927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.882959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.882996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.883026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.883054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.883103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.883134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.883177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.883205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.883328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.883361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.883406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.883442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.883476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.883508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 
04:01:23.883538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.883569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.883603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.883635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.883669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.883702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.883728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.883762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.883791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.883819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.883852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.883888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.883917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.883947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.883980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.884015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.884047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.884085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.884118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.884154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.884189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.884218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.884260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.884291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.884328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.884359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.884512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.884548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.884580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 
[2024-04-19 04:01:23.884617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.884650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.884682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.884715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.884747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.884777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.884808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.884835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.884867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.884896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.884924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.884954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.884988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.885021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.885057] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.885092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.885127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.885161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.885188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.885216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.885253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.885285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.885313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.885343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.885375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.885408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.885439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.885471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.885611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.455 [2024-04-19 04:01:23.885649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.885690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.885720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.885754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.885784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.885814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.885840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.885874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.885912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.885946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.455 [2024-04-19 04:01:23.885980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.886013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.886048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.886079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.886111] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.886138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.886166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.886189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.886222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.886249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.886280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.886316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.886349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.886378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.886410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.886444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.886477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.886508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.886541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.886598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.886632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.886662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.886693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.886854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.886881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.886915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.886944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.886973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.887008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.887046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.887085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.887117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.887144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 
04:01:23.887173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.887206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.887241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.887279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.887314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.887350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.887385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.887423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.887458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.887492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.887528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.887562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.887594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.887635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.887669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.887702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.887731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.887763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.887810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.887841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.887877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.887916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.888071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.888104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.888140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.888174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.888206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.888234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.888265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 
[2024-04-19 04:01:23.888296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.888322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.888349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.888377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.888412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.888447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.888479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.888509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.888541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.888571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.888609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.888643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.888677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.888712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.888744] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.456 [2024-04-19 04:01:23.888775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* line repeated several hundred times, timestamps 04:01:23.888805 through 04:01:23.899853; duplicates omitted ...]
00:10:09.459 [2024-04-19 04:01:23.899853] ctrlr_bdev.c:
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.899888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.899918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.899949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.899977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.900004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.900034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.900066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.900094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.900128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.900160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.900197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.900233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.900264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.900291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.459 [2024-04-19 04:01:23.900315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.900348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.900378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.900417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.900448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.900481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.900514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.900545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.900594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.900629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.900671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.900702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.900851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.900880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.900909] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.900936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.900964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.901000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.901034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.901067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.901096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.901122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.459 [2024-04-19 04:01:23.901155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.901188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.901221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.901255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.901285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.901311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.901341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.901372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.901405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.901435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.901468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.901495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.901528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.901562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.901594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.901628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.901660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.901695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.901728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.901759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.901792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 Message suppressed 999 
times: Read completed with error (sct=0, sc=15) 00:10:09.460 [2024-04-19 04:01:23.901945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.901975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 
[2024-04-19 04:01:23.902427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902900] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.902985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.903014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.903042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.903071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 true 00:10:09.460 [2024-04-19 04:01:23.903204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.903239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.903273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.903303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.903328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.903355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.903380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.903412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.903440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.903471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.903503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.903532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.903563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.903597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.903629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.903660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.903694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.903730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.903763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.903795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.903829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.903861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.903891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 
04:01:23.903920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.903955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.903984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.904014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.904042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.904095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.904130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.904162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.904200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.904353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.904384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.904418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.904452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.904481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.904510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.904546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.904577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.904615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.904651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.904680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.460 [2024-04-19 04:01:23.904709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.904739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.904770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.904800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.904831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.904865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.904892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.904924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.904952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 
[2024-04-19 04:01:23.904982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.905012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.905041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.905069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.905099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.905133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.905168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.905197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.905236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.905270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.905460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.905498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.905528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.905567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.905608] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.905638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.905668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.905701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.905732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.905765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.905797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.905835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.905863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.905893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.905920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.905949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.905981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.906008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.906036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.461 [2024-04-19 04:01:23.906064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.906097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.906129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.906162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.906192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.906223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.906254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.906287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.906321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.906355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.906383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.906413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.906452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.906487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.906619] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.461 [2024-04-19 04:01:23.906659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* line repeated from 2024-04-19 04:01:23.906701 through 04:01:23.917138; repeats elided]
> SGL length 1 00:10:09.754 [2024-04-19 04:01:23.917172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.917203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.917232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.917265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.917298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.917327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.917359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.917390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.917427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.917455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.917482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.917505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.917533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.917564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.917594] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.917623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.917657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.917695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.917730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.917766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.917807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.917836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.917866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.917895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.917927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.917960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.917995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.918027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.918057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.918099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.918130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.918258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.918289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.918330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.918362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.918404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.918436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.918465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.918491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.918524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.918560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.918590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.918617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 
04:01:23.918649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.918680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.918709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.918740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.754 [2024-04-19 04:01:23.918771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.918800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.918832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.918863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.918896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.918927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.918967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.918997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.919022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.919051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.919081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.919112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.919168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.919213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.919243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.919272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.919305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.919448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.919474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.919501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.919530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.919560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.919587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.919613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.919650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 
[2024-04-19 04:01:23.919688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.919723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.919762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.919795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.919824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.919854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.919884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.919920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.919954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.919985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.920020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.920049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.920082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.920114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.920141] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.920169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.920198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.920225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.920260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.920307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.920367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.920410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.920442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.920474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.920618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.920644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.920679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.920716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.920751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.755 [2024-04-19 04:01:23.920785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.920822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.920857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.920889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.920928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.920961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.921000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.921034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.921071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.921104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.921135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.921170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.921198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.921227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.921253] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.921282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.921310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.921346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.921379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.921418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.921453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.921487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.921519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.921553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.921584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.921731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.921763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.921799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.921831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.921864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.921898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.921934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.921968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.922001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.922033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.922058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.922085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.922117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.922149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.922180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.922220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.922250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.922278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 
04:01:23.922307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.922340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.755 [2024-04-19 04:01:23.922376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.922409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.922439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.922472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.922498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.922526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.922555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.922587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.922619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.922648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.922703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.922736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.922774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.922804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.922935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.922975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 
[2024-04-19 04:01:23.923335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923795] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.923912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.924046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.924079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.924122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.924152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.924180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.924210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.924237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.924265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.924298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.924331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.756 [2024-04-19 04:01:23.924365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.756 [2024-04-19 04:01:23.924397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.757 04:01:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:09.757 04:01:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.759 [2024-04-19 
04:01:23.935299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.759 [2024-04-19 04:01:23.935330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.759 [2024-04-19 04:01:23.935362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.759 [2024-04-19 04:01:23.935391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.759 [2024-04-19 04:01:23.935447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.759 [2024-04-19 04:01:23.935490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.759 [2024-04-19 04:01:23.935530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.759 [2024-04-19 04:01:23.935567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.759 [2024-04-19 04:01:23.935597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.759 [2024-04-19 04:01:23.935751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.759 [2024-04-19 04:01:23.935790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.759 [2024-04-19 04:01:23.935828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.759 [2024-04-19 04:01:23.935864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.759 [2024-04-19 04:01:23.935898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.935935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.935970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.936008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.936040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.936068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.936097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.936128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.936158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.936193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.936226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.936255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.936287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.936316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.936347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.936377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 
[2024-04-19 04:01:23.936411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.936441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.936470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.936506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.936541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.936574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.936612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.936645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.936675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.936703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.936854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.936892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.936932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.936962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.936993] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.937024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.937064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.937099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.937130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.937166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.937198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.937229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.937259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.937286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.937314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.937343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.937370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.937406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.937442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.760 [2024-04-19 04:01:23.937473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.937505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.937539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.937570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.937604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.937639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.937666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.937693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.937722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.937751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.937779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.937823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.937855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:09.760 [2024-04-19 04:01:23.938001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 
04:01:23.938483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.938998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.939033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.939065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.939097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.939225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.939268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.939305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.939341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.939377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.939422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.760 [2024-04-19 04:01:23.939462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.939497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.939538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.939567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 
[2024-04-19 04:01:23.939603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.939634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.939664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.939692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.939722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.939756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.939789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.939820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.939849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.939885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.939920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.939952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.939981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.940012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.940041] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.940074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.940109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.940157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.940190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.940231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.940259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.940410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.940442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.940473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.940513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.940544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.940573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.940611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.940644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.761 [2024-04-19 04:01:23.940670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.940697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.940726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.940761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.940795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.940829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.940857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.940887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.940924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.940959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.940997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.941032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.941062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.941093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.941126] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.941161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.941195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.941227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.941258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.941292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.941319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.941348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.941387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.941419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.941566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.941598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.941632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.941665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.941697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.941725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.941753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.941783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.941814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.941842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.941871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.941906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.941942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.941978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.942006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.942035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.942068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.942100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 04:01:23.942129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761 [2024-04-19 
04:01:23.942162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.761
04:01:23.953182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.953219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.953252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.953280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.953310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.953339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.953370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.953406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.953440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.953475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.953510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.953541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.953567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.953603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.953639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.953669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.953701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.953731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.953757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.953785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.953822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.953852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.953885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.953918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.953950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.953984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.954018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.954061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.954090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 
[2024-04-19 04:01:23.954237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.954269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.954306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.954334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.954362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.954389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.954419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.954454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.954481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.954517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.954547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.954581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.954615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.954649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.954688] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.954717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.954745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.954773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.954800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.954830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.954856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.954889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.954918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.954951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.954986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.955022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.955060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.955093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.955125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.765 [2024-04-19 04:01:23.955157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.955214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.955247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.955281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.955310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.955465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.955499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.955528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.955563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.955592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.955621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.955656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.955688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.955720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.955748] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.955773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.955802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.955837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.955864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.955890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.955919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.955956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.955987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.956023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.956052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.765 [2024-04-19 04:01:23.956082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.956112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.956144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.956180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.956216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.956251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.956280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.956309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.956344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.956385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.956418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.956569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.956604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.956634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.956663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.956693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.956718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.956745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 
04:01:23.956770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.956803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.956831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.956862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.956893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.956923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.956955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.956991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.957024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.957058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.957091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.957122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.957153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.957184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.957213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.957243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.957269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.957298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.957332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.957364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.957397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.957432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.957466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.957516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.957550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.957693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.957723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.957754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.957781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 
[2024-04-19 04:01:23.957809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.957842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.957874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.957908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.957943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.957978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.958009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.958048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.958086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.958123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.958153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.958184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.958214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.958249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.958280] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.958314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.958344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.958373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.958404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.958432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.958465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.958498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.958542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.958572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.958602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.958635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.958691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.958719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.958838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.766 [2024-04-19 04:01:23.958871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.958912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.958941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.958971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.959002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.959033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.959066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.959099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.959133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.959165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.959196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.959227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.959257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.959283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.959311] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.959344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.959378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.959411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.959442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.959470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.959500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.959530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.959559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.766 [2024-04-19 04:01:23.959591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.767 [2024-04-19 04:01:23.959624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.767 [2024-04-19 04:01:23.959659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.767 [2024-04-19 04:01:23.959685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.767 [2024-04-19 04:01:23.959721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.767 [2024-04-19 04:01:23.959750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.767 [2024-04-19 04:01:23.959793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical ERROR line repeated for timestamps 04:01:23.959825 through 04:01:23.970676; duplicates omitted ...]
00:10:09.770 [2024-04-19 04:01:23.970712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.770 [2024-04-19 04:01:23.970739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.770 [2024-04-19 04:01:23.970767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.770 [2024-04-19 04:01:23.970793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.770 [2024-04-19 04:01:23.970825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.770 [2024-04-19 04:01:23.970854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.770 [2024-04-19 04:01:23.970883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.770 [2024-04-19 04:01:23.970917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.970953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.970985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.971017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.971047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.971105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.971135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.971166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 
04:01:23.971199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.971229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.971271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.971297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.971327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.971355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.971492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.971536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.971567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.971603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.971637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.971671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.971703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.971731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.971763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.971792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.971820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.971850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.971877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.971911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.971941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.971970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.972002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.972036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.972067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.972103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.972133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.972162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.972194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 
[2024-04-19 04:01:23.972224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.972255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.972288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.972317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.972349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.972390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.972436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.972470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.972616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.972653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.972685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.972711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.972738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.972767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.972797] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.972823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.972852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.972880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.972916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.972948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.972977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.973011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.973043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.973072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.973102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.973134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.973165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.973200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.973240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.771 [2024-04-19 04:01:23.973270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.973292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.973320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.973347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.973378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.973409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.973445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.973477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.973512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.973559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.973594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:09.771 [2024-04-19 04:01:23.973750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.973776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.973811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.973847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.973875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.973901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.973933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.973965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.973998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.974032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.974069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.974101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.974138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.974168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.974198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.974228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.974259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 
04:01:23.974292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.974327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.771 [2024-04-19 04:01:23.974358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.974388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.974423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.974456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.974485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.974521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.974555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.974589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.974619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.974659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.974693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.974742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.974775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.974898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.974928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.974971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 
[2024-04-19 04:01:23.975352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975815] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.975926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976843] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.976976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.977007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.977154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.977192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.977241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.977274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.977307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.977339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.977379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.977422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.977455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.772 [2024-04-19 04:01:23.977486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [identical nvmf_bdev_ctrlr_read_cmd error repeated for every timestamp from 04:01:23.977512 through 04:01:23.988238] 00:10:09.776 [2024-04-19 04:01:23.988271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.988301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.988334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.988366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.988395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.988428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.988460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.988502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.988540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.988570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.988712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.988742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.988767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.988796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.988826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 
04:01:23.988856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.988888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.988922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.988953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.988984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.989019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.989054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.989083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.989118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.989152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.989188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.989223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.989254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.989288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.989316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.989343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.989376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.989410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.989445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.989478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.989505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.989531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.989561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.989592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.989627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.989680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.989714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.989872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.989905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 
[2024-04-19 04:01:23.989936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.989967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.989997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.990030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.990070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.990105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.990141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.990176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.990213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.990252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.990291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.990322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.990350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.990382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.990423] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.990457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.990490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.990521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.990555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.990585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.990612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.990647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.990682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.990717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.990748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.990780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.990814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.990840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.990990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.776 [2024-04-19 04:01:23.991027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.991062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.991096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.991140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.991179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.991211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.991245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.991276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.991312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.991345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.991374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.991404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.991435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.991470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.991500] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.991532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.776 [2024-04-19 04:01:23.991561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.991588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.991619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.991647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.991679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.991716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.991754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.991785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.991819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.991845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.991871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.991903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.991936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.991984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 
04:01:23.992551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.992991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.993024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.993054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.993087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.993130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.993160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.993316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.993345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.993373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.993408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.993437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.993463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.993502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.993532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 
[2024-04-19 04:01:23.993560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.993586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.993619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.993649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.993679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.993709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.993740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.993772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.993804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.993832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.993865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.993900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.993929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.993957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.993989] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.994021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.994052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.994085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.994119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.994155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.994189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.994218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.994249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.994282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.994465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.994504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.994531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.994560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.994586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.777 [2024-04-19 04:01:23.994616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.994647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.994680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.994718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.994752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.994784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.994817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.994847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.994873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.994903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.994940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.994967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.994998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.777 [2024-04-19 04:01:23.995030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.778 [2024-04-19 04:01:23.995060] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.778 [2024-04-19 04:01:23.995095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:10:09.781 [2024-04-19 04:01:24.005444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.005475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.005507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.005535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.005564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.005598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.005630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.005660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.005685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.005719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.005751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.005909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.005935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.005972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006010] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.006891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 
04:01:24.007032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.007923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 
[2024-04-19 04:01:24.007954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.008001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.008038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.008074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.008103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.008237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.008278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.008312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.008343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.008375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.008412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.008444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.008473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.008503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.008529] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.781 [2024-04-19 04:01:24.008560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.008594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.008625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.008654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.008683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.008710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.008743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.008777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.008814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.008857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.008888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.008919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.008947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.008971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.782 [2024-04-19 04:01:24.009008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.009037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.009066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.009094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.009128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.009173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.009206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.009357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.009397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.009448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.009480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.009515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.009542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.009576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.009609] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.009644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.009679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.009708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.009738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.009765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.009794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.009829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.009860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.009893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.009921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.009955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.009987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.010018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.010050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.010087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.010119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.010152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.010183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.010217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.010248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.010278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.010312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.010342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.010374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:09.782 [2024-04-19 04:01:24.010562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.010594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.010624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.010651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.010678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.010707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.010740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.010779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.010813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.010847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.010882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.010914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.010941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.010969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.011001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.011031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.011057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.011085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 
[2024-04-19 04:01:24.011113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.011147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.011179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.011218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.011254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.011293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.011328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.011367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.011406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.011437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.011468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.011493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.011532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.011563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.011595] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.011628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.011772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.011813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.011849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.011883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.011917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.011945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.011976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.012005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.012036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.012064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.782 [2024-04-19 04:01:24.012095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.783 [2024-04-19 04:01:24.012123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.783 [2024-04-19 04:01:24.012155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.783 [2024-04-19 04:01:24.012189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.783 [2024-04-19 04:01:24.012224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.783 [2024-04-19 04:01:24.012258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.783 [2024-04-19 04:01:24.012294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.783 [2024-04-19 04:01:24.012331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.783 [2024-04-19 04:01:24.012362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.783 [2024-04-19 04:01:24.012394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.783 [2024-04-19 04:01:24.012431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.783 [2024-04-19 04:01:24.012461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.783 [2024-04-19 04:01:24.012495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.783 [2024-04-19 04:01:24.012527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.783 [2024-04-19 04:01:24.012561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.783 [2024-04-19 04:01:24.012591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.783 [2024-04-19 04:01:24.012618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.783 [2024-04-19 04:01:24.012644] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.783 [2024-04-19 04:01:24.012688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:10:09.786 [2024-04-19 04:01:24.023263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.023301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.023338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.023474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.023507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.023549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.023586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.023622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.023658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.023689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.023718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.023750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.023777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.023803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.023833] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.023864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.023893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.023923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.023953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.023981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.024012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.024045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.024081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.024110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.024137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.024170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.024203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.024231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.024262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.024291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.024327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.024374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.024417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.024462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.024495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.024626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.024667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.024693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.024724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.024751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.024780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.024808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.024838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 
04:01:24.024872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.024905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.024936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.024970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.025005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.025036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.025067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.025099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.025125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.025157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.025183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.025218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.025254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.025286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.025317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.025351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.025384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.025420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.025455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.025491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.025525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.025562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.025596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.025755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.025787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.025828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.025861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.025894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.025929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 
[2024-04-19 04:01:24.025960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.025990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.026025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.026059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.026091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.026120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.026148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.026176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.026212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.026244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.026279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.026310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.026346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.026384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.026422] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.786 [2024-04-19 04:01:24.026461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.026500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.026535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.026572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.026608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.026646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.026682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.026714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.026745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.026897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.026928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.026959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.026987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027524] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.027992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.028028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.028066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.028222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.028254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.028285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.028316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.028343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.028373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.028405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.028436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.028467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.028498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.028532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.028566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 
04:01:24.028598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.028634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.028664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.028695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.028729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.028759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.028788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.028816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.028847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.028878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.028909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.028942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.028972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.029001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.029034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.029064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.029113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.029162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.029191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.029339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.029374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.029414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.029449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.029482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.029518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.029554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.029590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.029629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.029668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 
[2024-04-19 04:01:24.029708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.029740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.029770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.029798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.029829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.029858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.029888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.029915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.029949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.029979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.030014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.030051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.030085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.030123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.030155] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.030185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.030218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.030249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.030273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.030305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.030345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.030378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.030511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.030544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.030584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.030614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.787 [2024-04-19 04:01:24.030648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.788 [2024-04-19 04:01:24.030680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.788 [2024-04-19 04:01:24.030709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.788 [2024-04-19 04:01:24.030745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:10:09.790 [2024-04-19 04:01:24.041727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.790 [2024-04-19 04:01:24.041760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.790 [2024-04-19 04:01:24.041793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.790 [2024-04-19 04:01:24.041823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.790 [2024-04-19 04:01:24.041858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.790 [2024-04-19 04:01:24.041886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.790 [2024-04-19 04:01:24.041914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.790 [2024-04-19 04:01:24.041942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.790 [2024-04-19 04:01:24.041967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.790 [2024-04-19 04:01:24.042114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.790 [2024-04-19 04:01:24.042151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.790 [2024-04-19 04:01:24.042196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.790 [2024-04-19 04:01:24.042231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.790 [2024-04-19 04:01:24.042260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.790 [2024-04-19 04:01:24.042285] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.790 [2024-04-19 04:01:24.042318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.790 [2024-04-19 04:01:24.042359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.790 [2024-04-19 04:01:24.042393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.790 [2024-04-19 04:01:24.042429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.790 [2024-04-19 04:01:24.042460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.790 [2024-04-19 04:01:24.042497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.790 [2024-04-19 04:01:24.042534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.790 [2024-04-19 04:01:24.042569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.042601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.042633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.042664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.042698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.042729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.042761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.042796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.042824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.042858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.042887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.042915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.042944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.042975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.043003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.043035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.043064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.043126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.043160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.043196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.043227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 
04:01:24.043382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.043416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.043446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.043475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.043509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.043541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.043576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.043613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.043648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.043680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.043713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.043747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.043777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.043807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.043838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.043869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.043898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.043926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.043957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.043985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.044013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.044046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.044077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.044107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.044137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.044178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.044216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.044248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.044311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 
[2024-04-19 04:01:24.044341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.044378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.044418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.044561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.044599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.044629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.044664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.044699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.044736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.044775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.044813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.044843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.044875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.044907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.044940] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.044972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045958] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.045989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.046017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.046045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.046074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.046108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.046146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.046176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.046209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.046242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.046271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.046297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.046331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.046366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.046394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.046424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.046452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.046488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.791 [2024-04-19 04:01:24.046521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.046555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.046591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.046640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 Message suppressed 999 times: [2024-04-19 04:01:24.046674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 Read completed with error (sct=0, sc=15) 00:10:09.792 [2024-04-19 04:01:24.046815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.046851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.046889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.046924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.046963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.047001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.047034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.047061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.047092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.047126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.047160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.047198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.047228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.047262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.047291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.047321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.047348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.047383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.047420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.047451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 
[2024-04-19 04:01:24.047485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.047520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.047552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.047584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.047618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.047651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.047687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.047719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.047767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.047806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.047850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.047884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.048046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.048082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.048118] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.048151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.048182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.048218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.048247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.048281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.048308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.048334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.048364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.048395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.048444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.048475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.048505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.048535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.792 [2024-04-19 04:01:24.048570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.792 [2024-04-19 04:01:24.048600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:09.794 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:09.794 04:01:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:09.794 [2024-04-19 04:01:24.231748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:09.795 [2024-04-19 04:01:24.232621] ctrlr_bdev.c:
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.232654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.232684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.232718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.232751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.232777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.232802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.232954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233686] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.233937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 
04:01:24.234737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.234979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.795 [2024-04-19 04:01:24.235012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.235046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.235079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.235125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.235159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.235293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.235327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.235368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.235396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.235428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.235458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.235487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.235516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.235546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.235578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.235608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.235637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.235670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.235703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.235733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 
[2024-04-19 04:01:24.235763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.235797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.235825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.235848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.235886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.235925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.235956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.235986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.236019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.236053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.236088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.236121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.236155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.236197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.236231] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.236394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.236437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.236468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.236502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.236534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.236567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.236601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.236627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.236663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.236697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.236726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.236759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.236793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.236820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.796 [2024-04-19 04:01:24.236845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.236878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.236909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.236937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.236968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.236999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.237032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.237069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.237112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.237146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.237177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.237209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.237238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.237267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.237297] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.237327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.237360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.237386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.237416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.237569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.237606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.237645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.796 [2024-04-19 04:01:24.237678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.237714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.237748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.237781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.237814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.237842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.237868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.237897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.237928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.237959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.237986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.238018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.238047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.238080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.238112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.238142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.238177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.238207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.238238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.238272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.238303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 
04:01:24.238344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.238373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.238404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.238432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.238463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.238491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.238638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.238669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.238700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.238733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.238774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.238806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.238831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.238859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.238887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.238917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.238952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.238988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.239016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.239045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.239072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.239105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.239141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.239171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.239207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.239237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.239268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.239302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.239337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 
[2024-04-19 04:01:24.239368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.239397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.239433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.239462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.239497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.239536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.239565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.239596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.239627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.239683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.239716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.239750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.239787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.239945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.239974] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.797 [2024-04-19 04:01:24.240001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:09.800 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:10:09.800 [2024-04-19 04:01:24.250207] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.800 [2024-04-19 04:01:24.250242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.800 [2024-04-19 04:01:24.250272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.800 [2024-04-19 04:01:24.250296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.800 [2024-04-19 04:01:24.250325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.800 [2024-04-19 04:01:24.250354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.800 [2024-04-19 04:01:24.250389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.800 [2024-04-19 04:01:24.250421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.800 [2024-04-19 04:01:24.250453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.800 [2024-04-19 04:01:24.250482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.801 [2024-04-19 04:01:24.250514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.801 [2024-04-19 04:01:24.250545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.801 [2024-04-19 04:01:24.250578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.801 [2024-04-19 04:01:24.250607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.801 [2024-04-19 04:01:24.250636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:09.801 [2024-04-19 04:01:24.250668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.801 [2024-04-19 04:01:24.250702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.801 [2024-04-19 04:01:24.250731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.801 [2024-04-19 04:01:24.250765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.801 [2024-04-19 04:01:24.250798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:09.801 [2024-04-19 04:01:24.250826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.250863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.250901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.250938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.250972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251163] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.251992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.252025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.252055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.252087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.252116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.252149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 
04:01:24.252200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.252225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.252264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.252293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.252432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.252477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.252506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.252536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.252567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.252596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.252629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.252663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.252694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.252726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.252756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.252784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.252813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.252847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.252883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.252912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.252938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.252973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.253010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.253039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.253071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.253101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.253133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.253172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.253207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 
[2024-04-19 04:01:24.253232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.253259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.253289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.253325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.253368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.253398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.253544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.253583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.253625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.253659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.253690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.253720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.253749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.253774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.253810] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.253837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.253864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.253896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.253925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.253954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.253980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.254008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.254043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.093 [2024-04-19 04:01:24.254073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.254110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.254146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.254175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.254207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.254236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.094 [2024-04-19 04:01:24.254263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.254287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.254316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.254348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.254374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.254412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.254444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.254499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.254532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.254571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.254606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.254747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.254780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.254814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.254847] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.254882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.254907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.254935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.254974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.255006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.255036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.255063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.255089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.255119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.255150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.255181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.255210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.255246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.255269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.255295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.255328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.255359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.255393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.255423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.255452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.255480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.255512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.255543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.255575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.255632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.255663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.255694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.255729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 
04:01:24.255898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.255932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.255960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.255988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.256020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.256054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.256085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.256115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.256144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.256179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.256211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.256244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.256278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.256307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.256336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.256363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.256396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.256431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.256459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.256483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.256515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.256546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.256573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.256605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.256636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.256669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.256701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.256737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.256770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 
[2024-04-19 04:01:24.256809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.256841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.256977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.257003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.257031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.257057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.257094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.257126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.257158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.257186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.257217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.257251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.257285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.257315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.094 [2024-04-19 04:01:24.257341] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:10.095 04:01:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025
00:10:10.095 04:01:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:10:10.098 [2024-04-19 04:01:24.267729] ctrlr_bdev.c:
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.267758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.267791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.267827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.267862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.267898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.267929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.267968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.267999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.268029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.268060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.268091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.268120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.268153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.268181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.098 [2024-04-19 04:01:24.268213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.268244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.268300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.268336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.268387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.268425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.268567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.268598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.268628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.268656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.268684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.268715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.268744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.268776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.268811] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.268850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.268882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.268911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.268940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.268971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.269001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.269033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.269067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.269099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.269132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.269162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.269196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.269225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.269257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.269287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.269315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.269347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.269381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.269417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.269446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.269479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.269651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.269688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.269715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.269754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.269791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.269829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.269860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 
04:01:24.269890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.269920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.269951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.269983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.270018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.270049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.270079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.270110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.270143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.270169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.270198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.270230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.270258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.270289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.270322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.270350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.270379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.270410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.270444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.270478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.270520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.270550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.270581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.270616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.270675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.270702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.270731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.270761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.098 [2024-04-19 04:01:24.270897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 
[2024-04-19 04:01:24.270936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.270971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271392] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.271908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272436] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.272990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.273128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.273158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.273196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.273222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.273251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.273281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.273308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.273336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.273367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.273396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.273424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 
04:01:24.273452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.273485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.273515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.273548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.273578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.273613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.273647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.273675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.273705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.273740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.273781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.273815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.273847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.273871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.273904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.273937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.273971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.274001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.274031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.274071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.099 [2024-04-19 04:01:24.274116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.100 [2024-04-19 04:01:24.274145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.100 [2024-04-19 04:01:24.274307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.100 [2024-04-19 04:01:24.274334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.100 [2024-04-19 04:01:24.274361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.100 [2024-04-19 04:01:24.274390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.100 [2024-04-19 04:01:24.274422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.100 [2024-04-19 04:01:24.274449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.100 [2024-04-19 04:01:24.274476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.100 
[2024-04-19 04:01:24.274510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.100 [2024-04-19 04:01:24.274546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.100 [2024-04-19 04:01:24.274574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.100 [2024-04-19 04:01:24.274611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.100 [2024-04-19 04:01:24.274640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.100 [2024-04-19 04:01:24.274672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.100 [2024-04-19 04:01:24.274702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.100 [2024-04-19 04:01:24.274731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.100 [2024-04-19 04:01:24.274762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.100 [2024-04-19 04:01:24.274791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.100 [2024-04-19 04:01:24.274817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.100 [2024-04-19 04:01:24.274846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.100 [2024-04-19 04:01:24.274878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.100 [2024-04-19 04:01:24.274913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.100 [2024-04-19 04:01:24.274943] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.100 [2024-04-19 04:01:24.274977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [identical *ERROR* line repeated from 04:01:24.275010 through 04:01:24.284478; repeats omitted] 00:10:10.103 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:10.103 [2024-04-19 04:01:24.285298] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.285327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.285355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.285382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.285412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.285442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.285471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.285506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.285535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.285584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.285615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.285759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.285797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.285833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.285870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.103 [2024-04-19 04:01:24.285902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.285936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.285972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.286007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.286036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.286067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.286097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.286130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.286160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.286190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.286222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.286253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.286284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.286314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.286348] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.103 [2024-04-19 04:01:24.286382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.286417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.286448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.286476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.286506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.286536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.286574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.286609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.286637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.286668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.286709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.286735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.286877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.286908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.286953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.286986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 
04:01:24.287413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.287982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 
[2024-04-19 04:01:24.288446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288890] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.288993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.289028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.289175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.289206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.289252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.289285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.289317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.289352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.289387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.104 [2024-04-19 04:01:24.289423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.289456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.289489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.105 [2024-04-19 04:01:24.289517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.289545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.289579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.289614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.289647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.289680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.289715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.289741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.289766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.289796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.289829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.289859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.289889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.289920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.289951] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.289987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.290017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.290047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.290080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.290110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.290157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.290202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.290234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.290396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.290429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.290457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.290486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.290514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.290553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.290581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.290613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.290648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.290681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.290711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.290740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.290768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.290799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.290831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.290858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.290883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.290918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.290955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.290993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 
04:01:24.291025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.291054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.291086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.291115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.291144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.291177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.291211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.291242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.291270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.291302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.291340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.291375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.291512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.291551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.291594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.291634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.291663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.291696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.291724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.291752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.291780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.291809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.291837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.291872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.291907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.291935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.291967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.292001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.292030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 
[2024-04-19 04:01:24.292062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.292093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.292122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.292153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.292182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.292214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.292246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.105 [2024-04-19 04:01:24.292281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.106 [2024-04-19 04:01:24.292310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.106 [2024-04-19 04:01:24.292336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.106 [2024-04-19 04:01:24.292372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.106 [2024-04-19 04:01:24.292422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.106 [2024-04-19 04:01:24.292453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.106 [2024-04-19 04:01:24.292506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.106 [2024-04-19 04:01:24.292534] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 
[2024-04-19 04:01:24.303021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303492] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.109 [2024-04-19 04:01:24.303977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.304005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.304036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.304074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.304213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.304245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.304278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.304307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.304337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.304370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.304412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.304444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.304471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.304506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.304535] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.304567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.109 [2024-04-19 04:01:24.304597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.304627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.304656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.304690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.304717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.304745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.304780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.304816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.304853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.304883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.304912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.304940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.304982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.305013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.305043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.305071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.305106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.305139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.305282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.305319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.305361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.305392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.305428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.305461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.305490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.305522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.305551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 
04:01:24.305590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.305621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.305654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.305681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.305709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.305737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.305775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.305804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.305835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.305864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.305895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.305921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.305947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.305981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.306011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.306042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.306075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.306105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.306137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.306172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.306206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.306249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.306283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.306452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.306486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.306517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.306544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.306580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.306612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 
[2024-04-19 04:01:24.306640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.306674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.306706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.306733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.306768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.306801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.306842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.306872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.306904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.306934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.306960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.306989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.307019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.307050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.307080] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.307108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.307141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.307178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.307210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.307240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.307273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.307303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.307334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.307364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.307393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.307437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.307471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.307590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.307618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.110 [2024-04-19 04:01:24.307660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.307694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.307723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.307750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.307781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.307816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.307850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.307887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.110 [2024-04-19 04:01:24.307915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.307942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.307972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.308003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.308033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.308060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.308089] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.308118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.308148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.308178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.308207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.308237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.308268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.308301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.308335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.308369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.308411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.308443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.308483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.308516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.308555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.308583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.308741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.308773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.308810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.308849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.308887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.308918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.308945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.308977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.309006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.309032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.309060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.309087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.309118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 
04:01:24.309147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.309181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.309210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.309242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.309276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.309312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.309343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.309378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.309410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.309442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.309479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.309507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.309536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.309563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.309592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 [2024-04-19 04:01:24.309625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.111 
[... previous message repeated; identical *ERROR* lines with timestamps 04:01:24.309661 through 04:01:24.320020 elided ...] 00:10:10.114 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:10.114 
[... same *ERROR* message repeated; timestamps 04:01:24.320205 through 04:01:24.320518 elided ...] 00:10:10.114 
00:10:10.114 [2024-04-19 04:01:24.320552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.320586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.320615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.320646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.320678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.320713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.320742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.320770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.320796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.320830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.320859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.320887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.320919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.320950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.320985] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.321019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.321056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.321092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.321119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.321147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.321173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.321324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.321358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.321402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.321434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.321465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.321503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.321543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.321578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.114 [2024-04-19 04:01:24.321612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.321647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.321684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.321715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.321747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.321776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.321811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.114 [2024-04-19 04:01:24.321848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.321879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.321916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.321945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.321973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322064] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.322985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.323023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.323060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.323093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 
04:01:24.323129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.323161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.323191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.323218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.323246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.323278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.323305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.323341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.323377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.323410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.323442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.323483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.323530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.323567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.323708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.323735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.323762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.323791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.323828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.323857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.323885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.323912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.323940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.323970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.324008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.324039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.324070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.324099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.324131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 
[2024-04-19 04:01:24.324163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.324190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.324219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.324248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.324279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.324308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.324334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.324363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.324398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.324430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.324465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.324495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.324527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.324561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.324594] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.324772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.324801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.324833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.324866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.324895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.324928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.324969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.325003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.325034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.325066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.325096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.325125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.325154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.325181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.115 [2024-04-19 04:01:24.325211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.325241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.325275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.325303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.325334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.325366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.325397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.325432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.325463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.325502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.325534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.115 [2024-04-19 04:01:24.325564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.325594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.325630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.325663] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.325695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.325727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.325754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.325794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.325824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.325974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 
04:01:24.326709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.326979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.327107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.327147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.327182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.327207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.327239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.327271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.327301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.116 [2024-04-19 04:01:24.338056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.338093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.338123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.338156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.338190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.338223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.338258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.338291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.338324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.338484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.338512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.338543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.338576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.338625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.338659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 
[2024-04-19 04:01:24.338695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.338731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.338769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.338810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.338840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.338869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.338901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.338933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.338963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.338992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.339022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.339050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.339077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.339106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.339137] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.339162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.339190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.339225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.339258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.339287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.339321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.119 [2024-04-19 04:01:24.339356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.339384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.339413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.339454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.339484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.339521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.339563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.339595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.120 [2024-04-19 04:01:24.339743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.339772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.339804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.339837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.339871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.339904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.339934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.339968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.339997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.340028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.340055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.340083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.340116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.340143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.340176] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.340207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.340238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.340272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.340305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.340333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.340361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.340393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.340435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.340467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.340498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.340536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.340566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.340596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.340631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.340663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.340714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.340748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.340872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.340901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.340940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.340973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.341007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.341042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.341073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.341102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.341130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.341163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.341194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 
04:01:24.341229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.341266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.341301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.341340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.341370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.341398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.341433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.341462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.341493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.341523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.341553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.341584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.341619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.341651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.341688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.341723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.341754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.341802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.341834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.341874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.341911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 
[2024-04-19 04:01:24.342310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342765] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.342998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.343046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.343074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.343199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.343236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.343271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.120 [2024-04-19 04:01:24.343304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.343335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.121 [2024-04-19 04:01:24.343360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.343391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.343425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.343460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.343486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.343519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.343547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.343576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.343609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.343644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.343676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.343712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.343743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.343781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.343821] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.343852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.343883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.343914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.343944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.343968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.343995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.344028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.344058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.344094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.344131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.344161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.344307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.344344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.344385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.344416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.344445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.344474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.344502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.344537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.344566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.344601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.344629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.344659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.344689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.344720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.344752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.344784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 04:01:24.344818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.121 [2024-04-19 
04:01:24.344857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19
04:01:24.355795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.355843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.355874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.355904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.355938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.355970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.356006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.356041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.356071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.356102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.356137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.356167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.356205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.356244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.356279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.356308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.356338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.356370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.356406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.356439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.356476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.356504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.356536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.356566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.356595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.356623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.356655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.356683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.356710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 
[2024-04-19 04:01:24.356743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.356776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:10.124 [2024-04-19 04:01:24.356829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.356868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:10:10.124 [2024-04-19 04:01:24.357338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357789] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.357994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.358040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.358072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.358217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.358256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.358292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.358327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.358358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.358387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.358420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.124 [2024-04-19 04:01:24.358454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.358489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.124 [2024-04-19 04:01:24.358522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.358555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.358590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.358618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.358647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.358674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.358703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.358738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.358766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.358797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.358832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.358864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.358898] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.358932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.358965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.358999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.359032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.359068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.359100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.359136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.359172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.359326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.359373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.359403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.359436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.359471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.359503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.359543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.359581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.359609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.359639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.359667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.359698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.359730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.359765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.359798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.359830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.359869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.359903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.359938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.359972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 
04:01:24.360004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.360034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.360066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.360099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.360130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.360160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.360191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.360219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.360257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.360290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.360320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.360356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.360389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.360535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.360569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.360608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.360639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.360665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.360696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.360728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.360761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.360798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.360834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.360870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.360908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.360939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.360979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.361015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.361048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 
[2024-04-19 04:01:24.361081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.361118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.361153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.361184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.361214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.361244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.361271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.361304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.361336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.361363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.361393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.361432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.361470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.361506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.361558] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.361593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.361619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.361647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.361795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.361827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.361857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.361891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.361923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.361957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.361996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.362034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.362073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.362106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.362142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.125 [2024-04-19 04:01:24.362175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.125 [2024-04-19 04:01:24.362205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.362237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.362266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.362294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.362323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.362351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.362387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.362424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.362455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.362488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.362526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.362557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.362591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.362623] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.362649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.362679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.362710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.362739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.362888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.362937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.362975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.363010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.363042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.363076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.363114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.363144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.363177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.363206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.363233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.363260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.363291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.363322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.363358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.363388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.363429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.363464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.363497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.363533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.363566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.363601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.363633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.363663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 
04:01:24.363688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.363716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.363748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.363779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.363811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.363850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.363888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 
[2024-04-19 04:01:24.364758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.364980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.365017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.365050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.365095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.365134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.365300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.365331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.365359] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.365391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.365424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.365452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.365486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.365520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.365555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.365589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.365624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.365651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.365677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.365706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.365731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.365766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.365800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.126 [2024-04-19 04:01:24.365829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.365864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.365903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.365935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.365972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.366003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.366033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.366068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.366104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.366136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.366167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.366197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.366229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.366276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.366308] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.366462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.366496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.366531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.366562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.366600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.366629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.366658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.366686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.366718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.126 [2024-04-19 04:01:24.366751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.366782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.366814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.366843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.366875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.366905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.366937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.366970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.367000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.367030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.367062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.367105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.367139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.367178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.367207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.367238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.367275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.367302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.367333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 
04:01:24.367367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.367398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.367434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.367585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.367625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.367664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.367693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.367717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.367744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.367773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.367802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.367838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.367872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.367905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.367940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.367979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.368011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.368044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.368079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.368113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.368149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.368183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.368216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.368243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.368272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.368299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.368327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.368353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.368383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 
[2024-04-19 04:01:24.368413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.368444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.368475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.368510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.368695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.368727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.368768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.368800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.368831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.368861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.368896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.368931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.368963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.368998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.369030] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.369065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.369099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.369132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.369165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.369198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.369229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.369264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.369297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.369330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.369358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.369388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.369418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.369446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.369477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.127 [2024-04-19 04:01:24.369508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.369539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.369574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.369607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.369640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.369665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.369692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.369722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.369752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.369912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.369944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.369986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370096] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.370901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.371048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.371095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 
04:01:24.371131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.127 [2024-04-19 04:01:24.371166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 [2024-04-19 04:01:24.371197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 [2024-04-19 04:01:24.371234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 [2024-04-19 04:01:24.371274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 [2024-04-19 04:01:24.371305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 [2024-04-19 04:01:24.371333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 [2024-04-19 04:01:24.371363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 [2024-04-19 04:01:24.371397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 [2024-04-19 04:01:24.371429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 [2024-04-19 04:01:24.371467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 [2024-04-19 04:01:24.371499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 [2024-04-19 04:01:24.371532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 [2024-04-19 04:01:24.371567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 [2024-04-19 04:01:24.371603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 [2024-04-19 04:01:24.371642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 [2024-04-19 04:01:24.371670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 [2024-04-19 04:01:24.371696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 [2024-04-19 04:01:24.371724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 [2024-04-19 04:01:24.371759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 [2024-04-19 04:01:24.371791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 [2024-04-19 04:01:24.371818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 [2024-04-19 04:01:24.371847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 [2024-04-19 04:01:24.371883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 [2024-04-19 04:01:24.371920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 [2024-04-19 04:01:24.371956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 [2024-04-19 04:01:24.371997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 [2024-04-19 04:01:24.372033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 [2024-04-19 04:01:24.372067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.128 
[... 2024-04-19 04:01:24.372118 through 04:01:24.382143: the identical "ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" message repeated continuously, elided ...]
04:01:24.382183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.382221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.382255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.382290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.382324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.382355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.382386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.382421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.382449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.382485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.382518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.382549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.382576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.382607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.382640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.382791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.382820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.382860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.382891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.382920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.382950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.382978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.383013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.383042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.383073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.383111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.383152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.383190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.383222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 
[2024-04-19 04:01:24.383253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.383282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.383314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.383345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.383373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.383406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.383440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.383481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.383510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.383539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.383567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.383593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.383628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.383665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.383697] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.383732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.383773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.383804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.383956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.383988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.384021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.384055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.384089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.384122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.384156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.384190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.384221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.384254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.384287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.130 [2024-04-19 04:01:24.384321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.384352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.384381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.384410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.384438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.384470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.384501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.384528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.384558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.384587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.384624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.384658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.384698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.384731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.130 [2024-04-19 04:01:24.384765] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.384795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.384820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.384853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.384880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.384910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.384942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 
04:01:24.385794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.385994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.386030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.386069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.386101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.386250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.386298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.386324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.386354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.386383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.386421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.386450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.386481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.386512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.386543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.386576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.386606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.386639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.386671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.386703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.386736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.386766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.386797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.386827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 
[2024-04-19 04:01:24.386858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.386890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.386925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.386955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.386986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.387018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.387048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.387079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.387112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.387141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.387172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.387206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.387254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.387284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.387441] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.387473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.387502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.387532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.387562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.387592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.387622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.387654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.387689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.387717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.387747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.387782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.387815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.387852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.387883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.131 [2024-04-19 04:01:24.387916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.387950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.387980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.388012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.388043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.388073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.388104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.388136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.388174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.388205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.388237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.388262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.388297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.388325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.388352] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.388512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.388545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.388593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.388623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.388654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.388680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.388707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.388734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.388770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.388807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.388836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.388869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.388907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.388941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.388975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.389007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.389043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.389075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.389106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.389139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.389172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.389203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.389235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.389269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.389301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.389334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.389364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 04:01:24.389391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 [2024-04-19 
04:01:24.389423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.131 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:10.132 [2024-04-19 04:01:24.400473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.400502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.400532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.400563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.400596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.400630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.400663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.400696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.400727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.400755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.400790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.400817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.400843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.400874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.400906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 
[2024-04-19 04:01:24.400935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.400963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.400991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401508] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.401983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.402011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.402041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.402070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.402100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.402134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.402167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.402201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.402236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.402269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.402442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.402485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.402521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.402549] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.402579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.402611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.402638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.402674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.402708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.402742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.134 [2024-04-19 04:01:24.402780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.402811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.402837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.402868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.402902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.402933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.402961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.402991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.403021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.403056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.403090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.403134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.403168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.403207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.403242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.403277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.403313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.403353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.403385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.403417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.403454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.403496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 
04:01:24.403525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.403668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.403699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.403731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.403760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.403796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.403826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.403851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.403893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.403923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.403954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.403984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.404012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.404044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.404080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.404110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.404141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.404176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.404214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.404253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.404282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.404311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.404343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.404375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.404408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.404441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.404467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.404497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.404530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 
[2024-04-19 04:01:24.404559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 true 00:10:10.135 [2024-04-19 04:01:24.404598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.404650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.404681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.404836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.404873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.404901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.404926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.404954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.404984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405142] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.405984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.406013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.406044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.406073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.406104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.406138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.135 [2024-04-19 04:01:24.406171] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.406204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.406236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.406267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.406303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.406337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.406369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.406403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.406437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.406466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.406494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.406526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.406554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.406583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.406613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.406648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.406678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.406711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.406739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.406775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.406812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.406854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.406882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.406909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.407056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.407096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.407136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.407170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 04:01:24.407205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [2024-04-19 
04:01:24.407234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.136 [preceding message repeated verbatim from 04:01:24.407264 through 04:01:24.417767; duplicate lines omitted] 00:10:10.139 [2024-04-19
04:01:24.417793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.417820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.417850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.417880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.417906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.417938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.417971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.418005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.418040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.418080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.418105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.418142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.418170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.418301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.418337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.418378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.418410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.418444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.418474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.418504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.418536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.418569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.418603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.418636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.418667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.418694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.418722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.418757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.418787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 
[2024-04-19 04:01:24.418821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.418850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.418879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.418908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.418939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.418973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.419006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.419038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.419069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.419098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.419128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.419155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.419201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.419229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.419266] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.419299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.419448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.139 [2024-04-19 04:01:24.419479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.419513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.419543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.419577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.419616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.419654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.419689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.419720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.419750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.419779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.419807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.419841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.140 [2024-04-19 04:01:24.419870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.419901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.419931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.419963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420317] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.420984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.421009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.421042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.421079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.421111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.421137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.421163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.421196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.421227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.421258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.421284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.421309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 
04:01:24.421348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.421382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.421419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.421452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.421486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.421516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.421686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.421713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.421742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.421771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.421808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.421837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.421866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.421898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.421925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.421953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.421991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.422025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.422061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.422089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.422119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.422152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.422181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.422212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.422243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.422274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.422299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.422325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 [2024-04-19 04:01:24.422355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.140 
[2024-04-19 04:01:24.422384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.422417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.422450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.422485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.422516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.422549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.422578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.422612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.422655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.422687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.422718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.422743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.422886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.422922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.422953] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.422985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.423981] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.424023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.424055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.424089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.424122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.424155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.424183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.424210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.424238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.424268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.424299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.424331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.424364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.424393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.424426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.141 [2024-04-19 04:01:24.424456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.142 [identical nvmf_bdev_ctrlr_read_cmd errors repeated continuously from 04:01:24.424456 through 04:01:24.434785; repeats omitted] 00:10:10.142 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:10.142 04:01:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:10.142 04:01:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.144
> SGL length 1 00:10:10.144 [2024-04-19 04:01:24.434819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.434851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.434884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.434921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.434952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.434984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435249] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.435996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.436021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.436051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.436087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.436120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.436150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.436182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.436211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.436245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.436279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 
04:01:24.436313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.436342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.436376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.436406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.144 [2024-04-19 04:01:24.436436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.436463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.436609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.436641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.436681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.436712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.436745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.436777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.436818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.436851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.436880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.436912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.436938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.436968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 
[2024-04-19 04:01:24.437327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437876] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.437969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.438001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.438029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.438063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.438094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.438129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.438162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.438196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.438231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.438266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.438308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.438342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.145 [2024-04-19 04:01:24.438372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.438398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.438428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.438458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.438488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.438515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.438545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.438575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.438609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.438642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.438691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.438733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.438765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.438795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.438827] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.438978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.439009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.439036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.439066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.439100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.439128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.439157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.439188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.439216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.439256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.439293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.439326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.439359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.439391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.439420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.439451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.145 [2024-04-19 04:01:24.439481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.439520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.439547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.439576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.439610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.439639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.439669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.439699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.439729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.439764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.439798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.439830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 
04:01:24.439861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.439891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 
[2024-04-19 04:01:24.440943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.440976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441498] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.146 [2024-04-19 04:01:24.441970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... above message repeated verbatim through 2024-04-19 04:01:24.452803 (00:10:10.146-00:10:10.150); duplicate log lines omitted ...]
> SGL length 1 00:10:10.150 [2024-04-19 04:01:24.452834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.452865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.452895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.452928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.452968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.452996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.453029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.453062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.453096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.453122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.453152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.453181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.453213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.453244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.453273] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.453305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.453335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.453363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.453394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.453442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.453477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.453508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.453536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.453564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.453604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.453633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.453764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.453797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.453838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.453868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.453902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.453935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.453976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.454006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.454035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.454070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.454103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.454132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.454169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.454199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.454227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.454259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.454287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 
04:01:24.454314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.454346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.454374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.454410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.454445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.454479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.454508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.454541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.454574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.454600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.454630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.454681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.454727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.454758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.454791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.454824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.454959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.454999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.455038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.455074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.455111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.455145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.455177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.455209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.455240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.455271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.455304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.455339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.455374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 
[2024-04-19 04:01:24.455407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.455435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.455469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.455505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.455539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.455577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.455612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.455643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.455673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.455701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.455729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.455763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.455795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.455830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.455866] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.150 [2024-04-19 04:01:24.455906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.455940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456914] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.456974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.457004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.457179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.457210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.457246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.457279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.457306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.457334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.457363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.457397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.457437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.457467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.457506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.457542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.457573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.457602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.457633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.457668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.457695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.457725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.457755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.457783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.457811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.457841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.457869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.457899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.457935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 
04:01:24.457965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.457996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.458026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.458056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.458088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.458118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.458152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.458195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.458225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.458253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.458284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.458422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.458461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.458497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.458537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.458566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.458598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.458631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.458665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.458692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.458725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.458754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.458788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.458823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.458854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.458884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.458925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.458957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.458989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 
[2024-04-19 04:01:24.459020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.459052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.459081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.459111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.459143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.459176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.459205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.459234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.459266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.459314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.459342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.151 [2024-04-19 04:01:24.459385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.152 [2024-04-19 04:01:24.459420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.152 [2024-04-19 04:01:24.459572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.152 [2024-04-19 04:01:24.459610] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.153 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:10.155 [2024-04-19 04:01:24.470372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:10:10.155 [2024-04-19 04:01:24.470405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.470435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.470468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.470498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.470531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.470557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.470585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.470615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.470651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.470684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.470710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.470737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.470785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.470825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.470873] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.470906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 
04:01:24.471904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.471991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.472133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.472173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.472217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.472247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.472280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.472309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.155 [2024-04-19 04:01:24.472342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.472372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.472410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.472442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.472471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.472498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.472529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.472565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.472601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.472628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.472654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.472681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.472705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.472738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.472768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.472806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.472843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.472881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.472911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 
[2024-04-19 04:01:24.472939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.472966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.472993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.473022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.473052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.473110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.473140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.473177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.473215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.473340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.473369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.473412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.473449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.473483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.473509] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.473541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.473575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.473609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.473638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.473667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.473695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.473720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.473756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.473789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.473823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.473858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.473890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.473920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.473954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.156 [2024-04-19 04:01:24.473980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474560] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.474984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.475009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.475040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.475071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.475098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.475133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.475160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.475194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.475222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.475256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.475286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.475318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.475361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.475390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.475558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 
04:01:24.475590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.475621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.475654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.475683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.475710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.475741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.475775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.475804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.156 [2024-04-19 04:01:24.475837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.475871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.475903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.475934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.475963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.475990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.476017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.476047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.476076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.476103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.476137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.476162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.476195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.476227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.476256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.476291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.476321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.476354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.476387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.476428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.476467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 
[2024-04-19 04:01:24.476497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.476537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.476566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.476697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.476726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.476766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.476794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.476826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.476858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.476890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.476924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.476960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.476993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.477022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.157 [2024-04-19 04:01:24.477049] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.487912] ctrlr_bdev.c:
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.487940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.487974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488944] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.488973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.489004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.489033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.489063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.489090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.489143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.489176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.489206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.489237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.489369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.160 [2024-04-19 04:01:24.489398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.489440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.489475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.489503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.489534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.489563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.489597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.489633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.489665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.489697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.489731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.489764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.489798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.489828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.489857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.489886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.489913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.489939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 
04:01:24.489970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.489996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.490031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.490062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.490090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.490121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.490160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.490211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.490245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.490288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.490321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.490350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.490386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.490528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.490563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.490600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.490632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.490664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.490700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.490728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.490758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.490788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.490821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.490855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.490890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.490926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.490968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.491000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.491033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 
[2024-04-19 04:01:24.491062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.491090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.491115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.491144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.491175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.491204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.491233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.491262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.491297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.491327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.491355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.491382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.491423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.491454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.491607] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.491639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.491678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.491713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.491746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.491782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.491818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.491851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.491880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.491914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.491942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.491975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.492004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.492030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.492057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.161 [2024-04-19 04:01:24.492087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.492117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.492149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.492190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.492221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.492252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.492285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.492314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.492339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.492370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.492399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.492431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.492461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.492495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.492524] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.492566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.492605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.492639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.492767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.492798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.492844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.492877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.492908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.161 [2024-04-19 04:01:24.492944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.492977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.493008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.493039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.493072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.493101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.493134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.493162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.493195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.493228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.493264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.493293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.493328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.493357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.493387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.493414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.493449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.493478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.493505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.493536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 
04:01:24.493571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.493607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.493642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.493710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.493744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.493777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.493808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.493841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.493872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 
[2024-04-19 04:01:24.494653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.494967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.495114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.495144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.495174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.495204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.495239] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.162 [2024-04-19 04:01:24.495272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.164 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:10.165 [2024-04-19 04:01:24.505514] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.165 [2024-04-19 04:01:24.505539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.165 [2024-04-19 04:01:24.505567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.165 [2024-04-19 04:01:24.505593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.165 [2024-04-19 04:01:24.505623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.165 [2024-04-19 04:01:24.505651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.165 [2024-04-19 04:01:24.505684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.165 [2024-04-19 04:01:24.505720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.165 [2024-04-19 04:01:24.505758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.165 [2024-04-19 04:01:24.505786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.165 [2024-04-19 04:01:24.505819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.165 [2024-04-19 04:01:24.505853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.165 [2024-04-19 04:01:24.505885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.165 [2024-04-19 04:01:24.505916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.165 [2024-04-19 04:01:24.505947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.165 [2024-04-19 04:01:24.505982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.165 [2024-04-19 04:01:24.506014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.165 [2024-04-19 04:01:24.506044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.165 [2024-04-19 04:01:24.506081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.165 [2024-04-19 04:01:24.506109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.165 [2024-04-19 04:01:24.506140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.165 [2024-04-19 04:01:24.506170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.165 [2024-04-19 04:01:24.506200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.165 [2024-04-19 04:01:24.506250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.165 [2024-04-19 04:01:24.506280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.165 [2024-04-19 04:01:24.506320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.165 [2024-04-19 04:01:24.506351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.506384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.506419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.506574] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.506604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.506635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.506666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.506699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.506739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.506771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.506810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.506844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.506879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.506917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.506953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.506985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.507017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.507053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.507083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.507118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.507151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.507180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.507213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.507245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.507278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.507307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.507336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.507372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.507406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.507436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.507467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.507496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 
04:01:24.507527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.507670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.507704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.507751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.507784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.507814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.507849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.507879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.507910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.507944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.507977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.508010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.508045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.508079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.508104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.508132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.508164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.508192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.508218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.508246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.508277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.508305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.508334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.508364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.508392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.508428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.508460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.508491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.508526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 
[2024-04-19 04:01:24.508552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.508582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.508624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.508674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.508706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.508836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.508866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.508905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.508939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.508977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509128] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.166 [2024-04-19 04:01:24.509877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.509910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510188] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.510977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.511006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.511167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.511202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.511229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 
04:01:24.511261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.511290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.511324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.511359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.511386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.511421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.511458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.511487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.511516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.511545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.511580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.511606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.511633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.511662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.511693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.511719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.511746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.511781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.511809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.511844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.511872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.511905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.511939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.511973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.512009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.512040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.512069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.512109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.512142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 
[2024-04-19 04:01:24.512275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.512317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.512348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.512376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.512407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.512441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.512475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.512509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.512541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.512578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.512613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.512638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.512666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.512693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167 [2024-04-19 04:01:24.512721] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.167
[... the same ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR* line repeated continuously from 04:01:24.512753 through 04:01:24.523257 (elapsed 00:10:10.167-00:10:10.170) ...]
[2024-04-19 04:01:24.523288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.170 [2024-04-19 04:01:24.523325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.170 [2024-04-19 04:01:24.523358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.170 [2024-04-19 04:01:24.523387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.170 [2024-04-19 04:01:24.523423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.170 [2024-04-19 04:01:24.523456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.170 [2024-04-19 04:01:24.523490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.170 [2024-04-19 04:01:24.523526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.170 [2024-04-19 04:01:24.523560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.170 [2024-04-19 04:01:24.523591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.170 [2024-04-19 04:01:24.523620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.170 [2024-04-19 04:01:24.523646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.170 [2024-04-19 04:01:24.523675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.170 [2024-04-19 04:01:24.523707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.170 [2024-04-19 04:01:24.523746] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.170 [2024-04-19 04:01:24.523779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.170 [2024-04-19 04:01:24.523938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.170 [2024-04-19 04:01:24.523970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.170 [2024-04-19 04:01:24.524016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.170 [2024-04-19 04:01:24.524048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.524082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.524118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.524155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.524193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.524226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.524259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.524293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.524325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.524357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.171 [2024-04-19 04:01:24.524389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.524418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.524447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.524475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.524504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.524532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.524560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.524591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.524619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.524647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.524679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.524714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.524749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.524783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.524809] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.524836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.524865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 
04:01:24.525883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.525973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.526002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.526069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.526101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.526134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.526168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.526201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.526238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.526365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.526403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.526433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.526466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.526499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.526527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.526560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.526587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.526621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.526654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.526685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.526716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.526749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.526780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.526813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.526840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.526864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.526891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 
[2024-04-19 04:01:24.526919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.526950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.526979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.527014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.527050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.527081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.527115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.527149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.527202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.527233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.527277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.527312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.527477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.527511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.527545] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.527578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.527608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.527636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.527664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.527692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.527722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.527755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.171 [2024-04-19 04:01:24.527792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.527818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.527844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.527875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.527904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.527935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.527964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.172 [2024-04-19 04:01:24.527996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.528033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.528065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.528098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.528131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.528165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.528201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.528236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.528270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.528308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.528344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.528384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.528418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.528454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.528484] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.528618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.528649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.528687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.528716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.528756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.528796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.528829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.528864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.528895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.528924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.528955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.528985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.529016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.529055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.529092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.529127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.529163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.529194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.529224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.529252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.529284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.529320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.529350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.529383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.529419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.529450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.529479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.529512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 
04:01:24.529559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.529599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.529637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.529671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.529704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.529829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.529865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.529902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.529938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.529966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.529994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.530027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.530062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.530093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.530124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 [2024-04-19 04:01:24.530160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.172 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:10.174 [2024-04-19 04:01:24.541098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 
[2024-04-19 04:01:24.541131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.541164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.541195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.541234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.541288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.541320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.541364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.541399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.541536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.541576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.541603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.541632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.541663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.541694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.541723] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.541752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.541782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.541818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.541849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.541882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.541909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.541935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.541965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.541994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.542025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.542059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.542091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.542126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.542158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.176 [2024-04-19 04:01:24.542192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.542224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.542258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.542294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.542330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.542368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.542443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.542478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.542505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.542540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.542571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.542604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.542756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.542789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.542825] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.542863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.542896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.542933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.542959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.542986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.543015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.543044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.543078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.543108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.543141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.543177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.543217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.543252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.543286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.543318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.543353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.543386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.543425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.543460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.543491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.543522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.543551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.543582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.543616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.543641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.543683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.543723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.176 [2024-04-19 04:01:24.543862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 
04:01:24.543897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.543936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.543962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.543989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.544021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.544048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.544074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.544107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.544138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.544178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.544216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.544250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.544282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.544316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.544346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.544377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.544408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.544442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.544477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.544511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.544543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.544576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.544604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.544639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.544671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.544709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.544745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.544778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.544807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 
[2024-04-19 04:01:24.544858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.544894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545430] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.545958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.546005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.546033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.546059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.546090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.546216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.177 [2024-04-19 04:01:24.546251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.546292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.546328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.546366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.546409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.546438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.546467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.546501] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.546534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.546566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.546598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.546628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.546657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.546690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.546716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.546744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.546773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.546801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.546839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.546871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.546909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.546946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.546979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.547009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.547040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.547095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.547123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.547164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.547201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.547232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.547269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.547407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.547445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.547484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.547515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.547551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 
04:01:24.547586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.547617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.547645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.547673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.547700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.547731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.547761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.547791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.547819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.547847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.547880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.547915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.547950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.547982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.548015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.178 [2024-04-19 04:01:24.548048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.559109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.559136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.559163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.559191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.559217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.559250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.559278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.559310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.559344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.559379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.559411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.559444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.559482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.559512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 
[2024-04-19 04:01:24.559544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.559574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.559603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.559635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.559666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.559693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.559726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.559758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.559787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.559817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.559861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.559892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.559933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.559965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560112] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.560982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.182 [2024-04-19 04:01:24.561012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.561042] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.561087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.561116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.561242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.561272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.561313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.561345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.561380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.561413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.561445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.561475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.561508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.561539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.561584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.561613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.561642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.561668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.561699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.561727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.561755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.561783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.561812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.561839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.561870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.561903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.561934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.561971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.562007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.562043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 
04:01:24.562085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.562120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.562170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.562208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.562239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.562267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.562298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.562423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.562459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.562501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.562537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.562571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.562602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.562631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.562661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.562690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.562720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.562754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.562781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.562812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.562840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.562872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.562902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.562938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.562968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.563002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.563035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.563069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.563101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 
[2024-04-19 04:01:24.563131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.563160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.563193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.563227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.563279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.563311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.563351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.563388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.563427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.563464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.563605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.563636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.563666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.563697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.563730] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.563758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.563789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.563820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.563846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.563878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.563914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.563951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.563990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.564026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.564065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.564100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.564137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.564170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.564196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:10:10.183 [2024-04-19 04:01:24.564225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.564253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.564280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.564312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.564343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.564373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.564409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.564447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.564482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.564523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 [2024-04-19 04:01:24.564560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:10.183 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.444 04:01:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.444 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.444 04:01:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:10:10.444 04:01:24 -- 
target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:10.444 true 00:10:10.444 04:01:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:10.444 04:01:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.704 04:01:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.963 04:01:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:10:10.963 04:01:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:10.963 true 00:10:10.963 04:01:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:10.963 04:01:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.223 04:01:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.483 04:01:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:10:11.483 04:01:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:11.483 true 00:10:11.483 04:01:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:11.483 04:01:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.744 04:01:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.744 04:01:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 
00:10:11.744 04:01:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:12.004 true 00:10:12.004 04:01:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:12.004 04:01:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.264 04:01:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.264 04:01:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:10:12.264 04:01:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:12.523 true 00:10:12.523 04:01:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:12.523 04:01:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.783 04:01:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.783 04:01:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1031 00:10:12.783 04:01:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:13.043 true 00:10:13.043 04:01:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:13.043 04:01:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.303 04:01:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.303 04:01:27 -- 
target/ns_hotplug_stress.sh@40 -- # null_size=1032 00:10:13.303 04:01:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:13.563 true 00:10:13.563 04:01:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:13.563 04:01:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.563 04:01:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.823 04:01:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1033 00:10:13.823 04:01:28 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:14.081 true 00:10:14.082 04:01:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:14.082 04:01:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.082 04:01:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.341 04:01:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1034 00:10:14.341 04:01:28 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:14.599 true 00:10:14.599 04:01:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:14.599 04:01:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.599 04:01:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:10:14.858 04:01:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1035 00:10:14.858 04:01:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:14.858 true 00:10:15.117 04:01:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:15.117 04:01:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.117 04:01:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.375 04:01:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1036 00:10:15.375 04:01:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:15.375 true 00:10:15.375 04:01:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:15.375 04:01:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.634 04:01:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.893 04:01:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1037 00:10:15.893 04:01:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:15.893 true 00:10:15.893 04:01:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:15.893 04:01:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.151 04:01:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.410 04:01:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1038 00:10:16.410 04:01:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:16.410 true 00:10:16.410 04:01:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:16.410 04:01:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.669 04:01:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.928 04:01:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1039 00:10:16.928 04:01:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:16.929 true 00:10:16.929 04:01:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:16.929 04:01:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.189 04:01:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.448 04:01:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1040 00:10:17.448 04:01:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:10:17.448 true 00:10:17.448 04:01:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:17.448 04:01:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.708 04:01:32 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.708 04:01:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1041 00:10:17.708 04:01:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:17.968 true 00:10:17.968 04:01:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:17.968 04:01:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.228 04:01:32 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.228 04:01:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1042 00:10:18.228 04:01:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:18.488 true 00:10:18.488 04:01:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:18.488 04:01:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.760 04:01:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.760 04:01:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1043 00:10:18.760 04:01:33 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:19.021 true 00:10:19.021 04:01:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:19.021 04:01:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.021 04:01:33 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.281 04:01:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1044 00:10:19.281 04:01:33 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:19.540 true 00:10:19.540 04:01:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:19.540 04:01:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.540 04:01:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.800 04:01:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1045 00:10:19.800 04:01:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:19.800 true 00:10:20.060 04:01:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:20.060 04:01:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.060 04:01:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.319 04:01:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1046 00:10:20.319 04:01:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:20.319 true 00:10:20.319 04:01:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:20.319 04:01:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.579 04:01:34 -- 
target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.839 04:01:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1047 00:10:20.839 04:01:35 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:20.839 true 00:10:20.839 04:01:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:20.839 04:01:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.099 04:01:35 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.099 04:01:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1048 00:10:21.099 04:01:35 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:21.359 true 00:10:21.359 04:01:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:21.359 04:01:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.298 04:01:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.298 04:01:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1049 00:10:22.298 04:01:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:10:22.558 true 00:10:22.558 04:01:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:22.558 04:01:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:10:22.817 04:01:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.817 04:01:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1050 00:10:22.817 04:01:37 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:10:23.076 true 00:10:23.076 04:01:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:23.076 04:01:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.335 04:01:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.335 04:01:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1051 00:10:23.335 04:01:37 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:10:23.595 true 00:10:23.595 04:01:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:23.595 04:01:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.595 04:01:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.854 04:01:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1052 00:10:23.854 04:01:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:10:24.113 true 00:10:24.113 04:01:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259 00:10:24.113 04:01:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1
00:10:24.113 04:01:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:24.373 Initializing NVMe Controllers
00:10:24.373 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:10:24.373 Controller IO queue size 128, less than required.
00:10:24.373 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:24.373 Controller IO queue size 128, less than required.
00:10:24.373 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:24.373 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:24.373 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:24.373 Initialization complete. Launching workers.
00:10:24.373 ========================================================
00:10:24.373 Latency(us)
00:10:24.373 Device Information : IOPS MiB/s Average min max
00:10:24.373 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2610.20 1.27 18211.46 886.06 1006478.24
00:10:24.373 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 15941.47 7.78 8029.37 1664.42 271726.65
00:10:24.373 ========================================================
00:10:24.373 Total : 18551.67 9.06 9461.98 886.06 1006478.24
00:10:24.373
00:10:24.373 04:01:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1053
00:10:24.373 04:01:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:10:24.632 true
00:10:24.632 04:01:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 204259
00:10:24.632 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (204259) - No such process
00:10:24.632 04:01:38 -- target/ns_hotplug_stress.sh@44 -- # wait 204259
00:10:24.632 04:01:38 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:10:24.632 04:01:38 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini
00:10:24.632 04:01:38 -- nvmf/common.sh@477 -- # nvmfcleanup
00:10:24.632 04:01:38 -- nvmf/common.sh@117 -- # sync
00:10:24.632 04:01:38 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:10:24.632 04:01:38 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:10:24.632 04:01:38 -- nvmf/common.sh@120 -- # set +e
00:10:24.632 04:01:38 -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:24.632 04:01:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:10:24.632 rmmod nvme_rdma
00:10:24.632 rmmod nvme_fabrics
00:10:24.632 04:01:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:24.632 04:01:38 -- nvmf/common.sh@124 -- # set -e
00:10:24.632 04:01:38 -- nvmf/common.sh@125 -- # return 0
00:10:24.632 04:01:38 -- nvmf/common.sh@478 -- # '[' -n 203818 ']'
00:10:24.632 04:01:38 -- nvmf/common.sh@479 -- # killprocess 203818
00:10:24.632 04:01:38 -- common/autotest_common.sh@936 -- # '[' -z 203818 ']'
00:10:24.632 04:01:38 -- common/autotest_common.sh@940 -- # kill -0 203818
00:10:24.632 04:01:38 -- common/autotest_common.sh@941 -- # uname
00:10:24.632 04:01:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:10:24.632 04:01:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 203818
00:10:24.632 04:01:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:10:24.632 04:01:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:10:24.632 04:01:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 203818'
killing process with pid 203818
00:10:24.632 04:01:39 -- common/autotest_common.sh@955 -- # kill 203818
00:10:24.632 04:01:39 -- common/autotest_common.sh@960 -- # wait 203818
00:10:24.892 04:01:39 -- nvmf/common.sh@481 -- # '[' '' == iso
']' 00:10:24.892 04:01:39 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:10:24.892 00:10:24.892 real 0m40.724s 00:10:24.892 user 2m41.303s 00:10:24.892 sys 0m8.480s 00:10:24.892 04:01:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:24.892 04:01:39 -- common/autotest_common.sh@10 -- # set +x 00:10:24.892 ************************************ 00:10:24.892 END TEST nvmf_ns_hotplug_stress 00:10:24.892 ************************************ 00:10:24.892 04:01:39 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:10:24.892 04:01:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:24.892 04:01:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:24.892 04:01:39 -- common/autotest_common.sh@10 -- # set +x 00:10:25.152 ************************************ 00:10:25.152 START TEST nvmf_connect_stress 00:10:25.152 ************************************ 00:10:25.152 04:01:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:10:25.152 * Looking for test storage... 
00:10:25.152 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:25.152 04:01:39 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:25.152 04:01:39 -- nvmf/common.sh@7 -- # uname -s 00:10:25.152 04:01:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:25.152 04:01:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:25.152 04:01:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:25.152 04:01:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:25.152 04:01:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:25.152 04:01:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:25.152 04:01:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:25.152 04:01:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:25.152 04:01:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:25.152 04:01:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:25.152 04:01:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:10:25.152 04:01:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:10:25.152 04:01:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:25.152 04:01:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:25.152 04:01:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:25.152 04:01:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:25.152 04:01:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:25.152 04:01:39 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.152 04:01:39 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.152 04:01:39 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.152 04:01:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.152 04:01:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.152 04:01:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.152 04:01:39 -- paths/export.sh@5 -- # export PATH 00:10:25.152 04:01:39 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.152 04:01:39 -- nvmf/common.sh@47 -- # : 0 00:10:25.152 04:01:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:25.152 04:01:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:25.152 04:01:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:25.152 04:01:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:25.152 04:01:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:25.152 04:01:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:25.152 04:01:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:25.152 04:01:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:25.152 04:01:39 -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:25.152 04:01:39 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:10:25.152 04:01:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:25.152 04:01:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:25.152 04:01:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:25.152 04:01:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:25.152 04:01:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.152 04:01:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:25.152 04:01:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.152 04:01:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:25.152 04:01:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:25.152 04:01:39 
-- nvmf/common.sh@285 -- # xtrace_disable 00:10:25.152 04:01:39 -- common/autotest_common.sh@10 -- # set +x 00:10:30.444 04:01:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:30.444 04:01:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:30.444 04:01:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:30.444 04:01:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:30.444 04:01:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:30.444 04:01:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:30.444 04:01:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:30.444 04:01:44 -- nvmf/common.sh@295 -- # net_devs=() 00:10:30.444 04:01:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:30.444 04:01:44 -- nvmf/common.sh@296 -- # e810=() 00:10:30.444 04:01:44 -- nvmf/common.sh@296 -- # local -ga e810 00:10:30.444 04:01:44 -- nvmf/common.sh@297 -- # x722=() 00:10:30.444 04:01:44 -- nvmf/common.sh@297 -- # local -ga x722 00:10:30.444 04:01:44 -- nvmf/common.sh@298 -- # mlx=() 00:10:30.444 04:01:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:30.444 04:01:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:30.444 04:01:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:30.444 04:01:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:30.444 04:01:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:30.444 04:01:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:30.444 04:01:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:30.444 04:01:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:30.444 04:01:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:30.444 04:01:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:30.444 04:01:44 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:30.444 04:01:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:30.444 04:01:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:30.444 04:01:44 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:30.444 04:01:44 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:30.444 04:01:44 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:30.444 04:01:44 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:30.444 04:01:44 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:30.444 04:01:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:30.444 04:01:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:30.444 04:01:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:30.444 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:30.444 04:01:44 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:30.444 04:01:44 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:30.444 04:01:44 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:30.444 04:01:44 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:30.444 04:01:44 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:30.444 04:01:44 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:30.444 04:01:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:30.444 04:01:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:30.444 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:30.444 04:01:44 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:30.444 04:01:44 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:30.444 04:01:44 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:30.444 04:01:44 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:30.444 04:01:44 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:30.444 04:01:44 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:30.444 
04:01:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:30.445 04:01:44 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:30.445 04:01:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:30.445 04:01:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.445 04:01:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:30.445 04:01:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.445 04:01:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:30.445 Found net devices under 0000:18:00.0: mlx_0_0 00:10:30.445 04:01:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.445 04:01:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:30.445 04:01:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.445 04:01:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:30.445 04:01:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.445 04:01:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:30.445 Found net devices under 0000:18:00.1: mlx_0_1 00:10:30.445 04:01:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.445 04:01:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:30.445 04:01:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:30.445 04:01:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:30.445 04:01:44 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:10:30.445 04:01:44 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:10:30.445 04:01:44 -- nvmf/common.sh@409 -- # rdma_device_init 00:10:30.445 04:01:44 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:10:30.445 04:01:44 -- nvmf/common.sh@58 -- # uname 00:10:30.445 04:01:44 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:30.445 04:01:44 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:30.445 04:01:44 -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:30.445 04:01:44 -- 
nvmf/common.sh@64 -- # modprobe ib_umad 00:10:30.445 04:01:44 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:30.445 04:01:44 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:30.445 04:01:44 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:30.445 04:01:44 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:30.445 04:01:44 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:10:30.445 04:01:44 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:30.445 04:01:44 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:30.445 04:01:44 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:30.445 04:01:44 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:30.445 04:01:44 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:30.445 04:01:44 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:30.445 04:01:44 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:30.445 04:01:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:30.445 04:01:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:30.445 04:01:44 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:30.445 04:01:44 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:30.445 04:01:44 -- nvmf/common.sh@105 -- # continue 2 00:10:30.445 04:01:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:30.445 04:01:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:30.445 04:01:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:30.445 04:01:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:30.445 04:01:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:30.445 04:01:44 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:30.445 04:01:44 -- nvmf/common.sh@105 -- # continue 2 00:10:30.445 04:01:44 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:30.445 04:01:44 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 
00:10:30.445 04:01:44 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:30.445 04:01:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:30.445 04:01:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:30.445 04:01:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:30.445 04:01:44 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:30.445 04:01:44 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:30.445 04:01:44 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:30.445 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:30.445 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:10:30.445 altname enp24s0f0np0 00:10:30.445 altname ens785f0np0 00:10:30.445 inet 192.168.100.8/24 scope global mlx_0_0 00:10:30.445 valid_lft forever preferred_lft forever 00:10:30.445 04:01:44 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:30.445 04:01:44 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:30.445 04:01:44 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:30.445 04:01:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:30.445 04:01:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:30.445 04:01:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:30.445 04:01:44 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:30.445 04:01:44 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:30.445 04:01:44 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:30.445 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:30.445 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:10:30.445 altname enp24s0f1np1 00:10:30.445 altname ens785f1np1 00:10:30.445 inet 192.168.100.9/24 scope global mlx_0_1 00:10:30.445 valid_lft forever preferred_lft forever 00:10:30.445 04:01:44 -- nvmf/common.sh@411 -- # return 0 00:10:30.445 04:01:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:30.445 04:01:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:30.445 04:01:44 -- nvmf/common.sh@444 -- # [[ 
rdma == \r\d\m\a ]] 00:10:30.445 04:01:44 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:10:30.445 04:01:44 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:30.445 04:01:44 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:30.445 04:01:44 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:30.445 04:01:44 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:30.445 04:01:44 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:30.445 04:01:44 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:30.445 04:01:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:30.445 04:01:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:30.445 04:01:44 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:30.445 04:01:44 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:30.445 04:01:44 -- nvmf/common.sh@105 -- # continue 2 00:10:30.445 04:01:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:30.445 04:01:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:30.445 04:01:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:30.445 04:01:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:30.445 04:01:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:30.445 04:01:44 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:30.445 04:01:44 -- nvmf/common.sh@105 -- # continue 2 00:10:30.445 04:01:44 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:30.445 04:01:44 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:30.445 04:01:44 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:30.445 04:01:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:30.445 04:01:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:30.445 04:01:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:30.445 04:01:44 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 
00:10:30.445 04:01:44 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:30.445 04:01:44 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:30.445 04:01:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:30.445 04:01:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:30.445 04:01:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:30.445 04:01:44 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:10:30.445 192.168.100.9' 00:10:30.445 04:01:44 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:10:30.445 192.168.100.9' 00:10:30.445 04:01:44 -- nvmf/common.sh@446 -- # head -n 1 00:10:30.445 04:01:44 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:30.445 04:01:44 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:10:30.445 192.168.100.9' 00:10:30.445 04:01:44 -- nvmf/common.sh@447 -- # tail -n +2 00:10:30.445 04:01:44 -- nvmf/common.sh@447 -- # head -n 1 00:10:30.445 04:01:44 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:30.445 04:01:44 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:10:30.445 04:01:44 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:30.445 04:01:44 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:10:30.445 04:01:44 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:10:30.445 04:01:44 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:10:30.445 04:01:44 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:30.445 04:01:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:30.445 04:01:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:30.445 04:01:44 -- common/autotest_common.sh@10 -- # set +x 00:10:30.445 04:01:44 -- nvmf/common.sh@470 -- # nvmfpid=213682 00:10:30.445 04:01:44 -- nvmf/common.sh@471 -- # waitforlisten 213682 00:10:30.445 04:01:44 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:30.445 04:01:44 -- common/autotest_common.sh@817 -- # 
'[' -z 213682 ']' 00:10:30.445 04:01:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.445 04:01:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:30.445 04:01:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.445 04:01:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:30.445 04:01:44 -- common/autotest_common.sh@10 -- # set +x 00:10:30.445 [2024-04-19 04:01:44.902345] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:10:30.445 [2024-04-19 04:01:44.902388] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.445 EAL: No free 2048 kB hugepages reported on node 1 00:10:30.445 [2024-04-19 04:01:44.953497] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:30.705 [2024-04-19 04:01:45.026566] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.705 [2024-04-19 04:01:45.026600] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.705 [2024-04-19 04:01:45.026606] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.705 [2024-04-19 04:01:45.026611] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.705 [2024-04-19 04:01:45.026616] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:30.705 [2024-04-19 04:01:45.026722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.705 [2024-04-19 04:01:45.026810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:30.705 [2024-04-19 04:01:45.026811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.276 04:01:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:31.276 04:01:45 -- common/autotest_common.sh@850 -- # return 0 00:10:31.276 04:01:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:31.276 04:01:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:31.276 04:01:45 -- common/autotest_common.sh@10 -- # set +x 00:10:31.276 04:01:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:31.276 04:01:45 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:31.276 04:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:31.276 04:01:45 -- common/autotest_common.sh@10 -- # set +x 00:10:31.276 [2024-04-19 04:01:45.746933] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e9dee0/0x1ea23d0) succeed. 00:10:31.276 [2024-04-19 04:01:45.755994] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e9f430/0x1ee3a60) succeed. 
00:10:31.537 04:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:31.537 04:01:45 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:31.537 04:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:31.537 04:01:45 -- common/autotest_common.sh@10 -- # set +x 00:10:31.537 04:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:31.537 04:01:45 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:31.537 04:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:31.537 04:01:45 -- common/autotest_common.sh@10 -- # set +x 00:10:31.537 [2024-04-19 04:01:45.864678] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:31.537 04:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:31.537 04:01:45 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:31.537 04:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:31.537 04:01:45 -- common/autotest_common.sh@10 -- # set +x 00:10:31.537 NULL1 00:10:31.537 04:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:31.537 04:01:45 -- target/connect_stress.sh@21 -- # PERF_PID=213828 00:10:31.537 04:01:45 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:31.537 04:01:45 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:31.537 04:01:45 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:31.537 04:01:45 -- target/connect_stress.sh@27 -- # seq 1 20 00:10:31.537 04:01:45 -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:10:31.537 04:01:45 -- target/connect_stress.sh@28 -- # cat 00:10:31.537 04:01:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:31.537 04:01:45 -- target/connect_stress.sh@28 -- # cat 00:10:31.537 04:01:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:31.537 04:01:45 -- target/connect_stress.sh@28 -- # cat 00:10:31.537 04:01:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:31.537 04:01:45 -- target/connect_stress.sh@28 -- # cat 00:10:31.537 04:01:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:31.537 04:01:45 -- target/connect_stress.sh@28 -- # cat 00:10:31.537 04:01:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:31.537 04:01:45 -- target/connect_stress.sh@28 -- # cat 00:10:31.537 04:01:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:31.537 04:01:45 -- target/connect_stress.sh@28 -- # cat 00:10:31.537 EAL: No free 2048 kB hugepages reported on node 1 00:10:31.537 04:01:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:31.537 04:01:45 -- target/connect_stress.sh@28 -- # cat 00:10:31.537 04:01:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:31.537 04:01:45 -- target/connect_stress.sh@28 -- # cat 00:10:31.537 04:01:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:31.537 04:01:45 -- target/connect_stress.sh@28 -- # cat 00:10:31.537 04:01:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:31.537 04:01:45 -- target/connect_stress.sh@28 -- # cat 00:10:31.537 04:01:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:31.537 04:01:45 -- target/connect_stress.sh@28 -- # cat 00:10:31.537 04:01:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:31.537 04:01:45 -- target/connect_stress.sh@28 -- # cat 00:10:31.537 04:01:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:31.537 04:01:45 -- target/connect_stress.sh@28 -- # cat 00:10:31.537 
04:01:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:31.537 04:01:45 -- target/connect_stress.sh@28 -- # cat 00:10:31.537 04:01:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:31.537 04:01:45 -- target/connect_stress.sh@28 -- # cat 00:10:31.537 04:01:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:31.537 04:01:45 -- target/connect_stress.sh@28 -- # cat 00:10:31.537 04:01:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:31.537 04:01:45 -- target/connect_stress.sh@28 -- # cat 00:10:31.537 04:01:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:31.537 04:01:45 -- target/connect_stress.sh@28 -- # cat 00:10:31.537 04:01:45 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:31.537 04:01:45 -- target/connect_stress.sh@28 -- # cat 00:10:31.538 04:01:45 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:31.538 04:01:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:31.538 04:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:31.538 04:01:45 -- common/autotest_common.sh@10 -- # set +x 00:10:31.797 04:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:31.798 04:01:46 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:31.798 04:01:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:31.798 04:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:31.798 04:01:46 -- common/autotest_common.sh@10 -- # set +x 00:10:32.366 04:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:32.366 04:01:46 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:32.366 04:01:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:32.366 04:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:32.366 04:01:46 -- common/autotest_common.sh@10 -- # set +x 00:10:32.626 04:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:32.626 04:01:46 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:32.626 04:01:46 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:10:32.626 04:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:32.626 04:01:46 -- common/autotest_common.sh@10 -- # set +x 00:10:32.886 04:01:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:32.886 04:01:47 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:32.886 04:01:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:32.886 04:01:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:32.886 04:01:47 -- common/autotest_common.sh@10 -- # set +x 00:10:33.145 04:01:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:33.145 04:01:47 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:33.145 04:01:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:33.145 04:01:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:33.146 04:01:47 -- common/autotest_common.sh@10 -- # set +x 00:10:33.405 04:01:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:33.405 04:01:47 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:33.405 04:01:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:33.405 04:01:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:33.405 04:01:47 -- common/autotest_common.sh@10 -- # set +x 00:10:33.974 04:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:33.974 04:01:48 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:33.974 04:01:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:33.974 04:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:33.974 04:01:48 -- common/autotest_common.sh@10 -- # set +x 00:10:34.234 04:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:34.234 04:01:48 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:34.234 04:01:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:34.234 04:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:34.234 04:01:48 -- common/autotest_common.sh@10 -- # set +x 00:10:34.493 04:01:48 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:34.493 04:01:48 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:34.493 04:01:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:34.493 04:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:34.493 04:01:48 -- common/autotest_common.sh@10 -- # set +x 00:10:34.753 04:01:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:34.753 04:01:49 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:34.753 04:01:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:34.753 04:01:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:34.753 04:01:49 -- common/autotest_common.sh@10 -- # set +x 00:10:35.015 04:01:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:35.015 04:01:49 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:35.015 04:01:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:35.015 04:01:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:35.015 04:01:49 -- common/autotest_common.sh@10 -- # set +x 00:10:35.586 04:01:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:35.586 04:01:49 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:35.586 04:01:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:35.586 04:01:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:35.586 04:01:49 -- common/autotest_common.sh@10 -- # set +x 00:10:35.846 04:01:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:35.846 04:01:50 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:35.846 04:01:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:35.846 04:01:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:35.846 04:01:50 -- common/autotest_common.sh@10 -- # set +x 00:10:36.105 04:01:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:36.105 04:01:50 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:36.105 04:01:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:36.105 04:01:50 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:10:36.105 04:01:50 -- common/autotest_common.sh@10 -- # set +x 00:10:36.365 04:01:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:36.365 04:01:50 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:36.365 04:01:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:36.365 04:01:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:36.365 04:01:50 -- common/autotest_common.sh@10 -- # set +x 00:10:36.625 04:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:36.625 04:01:51 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:36.625 04:01:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:36.625 04:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:36.625 04:01:51 -- common/autotest_common.sh@10 -- # set +x 00:10:37.195 04:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:37.195 04:01:51 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:37.195 04:01:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:37.195 04:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:37.195 04:01:51 -- common/autotest_common.sh@10 -- # set +x 00:10:37.455 04:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:37.455 04:01:51 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:37.455 04:01:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:37.455 04:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:37.455 04:01:51 -- common/autotest_common.sh@10 -- # set +x 00:10:37.714 04:01:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:37.714 04:01:52 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:37.714 04:01:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:37.714 04:01:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:37.714 04:01:52 -- common/autotest_common.sh@10 -- # set +x 00:10:37.983 04:01:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:37.983 04:01:52 -- 
target/connect_stress.sh@34 -- # kill -0 213828 00:10:37.983 04:01:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:37.983 04:01:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:37.983 04:01:52 -- common/autotest_common.sh@10 -- # set +x 00:10:38.243 04:01:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:38.243 04:01:52 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:38.243 04:01:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:38.243 04:01:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:38.243 04:01:52 -- common/autotest_common.sh@10 -- # set +x 00:10:38.815 04:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:38.815 04:01:53 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:38.815 04:01:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:38.815 04:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:38.815 04:01:53 -- common/autotest_common.sh@10 -- # set +x 00:10:39.074 04:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:39.074 04:01:53 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:39.074 04:01:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:39.074 04:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:39.074 04:01:53 -- common/autotest_common.sh@10 -- # set +x 00:10:39.332 04:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:39.332 04:01:53 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:39.332 04:01:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:39.332 04:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:39.332 04:01:53 -- common/autotest_common.sh@10 -- # set +x 00:10:39.591 04:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:39.591 04:01:54 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:39.591 04:01:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:39.591 04:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:39.591 04:01:54 -- 
common/autotest_common.sh@10 -- # set +x 00:10:39.850 04:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:39.850 04:01:54 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:39.850 04:01:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:39.850 04:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:39.850 04:01:54 -- common/autotest_common.sh@10 -- # set +x 00:10:40.430 04:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:40.430 04:01:54 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:40.430 04:01:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:40.430 04:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:40.430 04:01:54 -- common/autotest_common.sh@10 -- # set +x 00:10:40.692 04:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:40.692 04:01:55 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:40.692 04:01:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:40.692 04:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:40.692 04:01:55 -- common/autotest_common.sh@10 -- # set +x 00:10:40.951 04:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:40.951 04:01:55 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:40.951 04:01:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:40.951 04:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:40.951 04:01:55 -- common/autotest_common.sh@10 -- # set +x 00:10:41.211 04:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:41.211 04:01:55 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:41.211 04:01:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:41.211 04:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:41.211 04:01:55 -- common/autotest_common.sh@10 -- # set +x 00:10:41.470 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:10:41.470 04:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:41.470 
04:01:55 -- target/connect_stress.sh@34 -- # kill -0 213828 00:10:41.470 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (213828) - No such process 00:10:41.470 04:01:55 -- target/connect_stress.sh@38 -- # wait 213828 00:10:41.470 04:01:55 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:41.470 04:01:55 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:41.470 04:01:55 -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:41.470 04:01:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:41.470 04:01:55 -- nvmf/common.sh@117 -- # sync 00:10:41.470 04:01:55 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:41.470 04:01:55 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:41.470 04:01:55 -- nvmf/common.sh@120 -- # set +e 00:10:41.470 04:01:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:41.470 04:01:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:41.730 rmmod nvme_rdma 00:10:41.730 rmmod nvme_fabrics 00:10:41.730 04:01:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:41.730 04:01:56 -- nvmf/common.sh@124 -- # set -e 00:10:41.730 04:01:56 -- nvmf/common.sh@125 -- # return 0 00:10:41.730 04:01:56 -- nvmf/common.sh@478 -- # '[' -n 213682 ']' 00:10:41.730 04:01:56 -- nvmf/common.sh@479 -- # killprocess 213682 00:10:41.730 04:01:56 -- common/autotest_common.sh@936 -- # '[' -z 213682 ']' 00:10:41.730 04:01:56 -- common/autotest_common.sh@940 -- # kill -0 213682 00:10:41.730 04:01:56 -- common/autotest_common.sh@941 -- # uname 00:10:41.730 04:01:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:41.730 04:01:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 213682 00:10:41.730 04:01:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:41.730 04:01:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:41.730 04:01:56 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 213682' 00:10:41.730 killing process with pid 213682 00:10:41.730 04:01:56 -- common/autotest_common.sh@955 -- # kill 213682 00:10:41.730 04:01:56 -- common/autotest_common.sh@960 -- # wait 213682 00:10:41.990 04:01:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:41.990 04:01:56 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:10:41.990 00:10:41.990 real 0m16.880s 00:10:41.990 user 0m41.327s 00:10:41.990 sys 0m5.698s 00:10:41.990 04:01:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:41.990 04:01:56 -- common/autotest_common.sh@10 -- # set +x 00:10:41.990 ************************************ 00:10:41.990 END TEST nvmf_connect_stress 00:10:41.990 ************************************ 00:10:41.990 04:01:56 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:10:41.990 04:01:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:41.990 04:01:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:41.990 04:01:56 -- common/autotest_common.sh@10 -- # set +x 00:10:41.990 ************************************ 00:10:41.990 START TEST nvmf_fused_ordering 00:10:41.990 ************************************ 00:10:41.990 04:01:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:10:42.250 * Looking for test storage... 
00:10:42.250 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:42.250 04:01:56 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:42.250 04:01:56 -- nvmf/common.sh@7 -- # uname -s 00:10:42.250 04:01:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:42.250 04:01:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.250 04:01:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.250 04:01:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.250 04:01:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.250 04:01:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.250 04:01:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.250 04:01:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.250 04:01:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.250 04:01:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.250 04:01:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:10:42.250 04:01:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:10:42.250 04:01:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.250 04:01:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.250 04:01:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:42.250 04:01:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.250 04:01:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:42.250 04:01:56 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.250 04:01:56 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.251 04:01:56 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.251 04:01:56 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.251 04:01:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.251 04:01:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.251 04:01:56 -- paths/export.sh@5 -- # export PATH 00:10:42.251 04:01:56 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.251 04:01:56 -- nvmf/common.sh@47 -- # : 0 00:10:42.251 04:01:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:42.251 04:01:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:42.251 04:01:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.251 04:01:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.251 04:01:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.251 04:01:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:42.251 04:01:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:42.251 04:01:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:42.251 04:01:56 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:42.251 04:01:56 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:10:42.251 04:01:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:42.251 04:01:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:42.251 04:01:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:42.251 04:01:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:42.251 04:01:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.251 04:01:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:42.251 04:01:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.251 04:01:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:42.251 04:01:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:42.251 04:01:56 
-- nvmf/common.sh@285 -- # xtrace_disable 00:10:42.251 04:01:56 -- common/autotest_common.sh@10 -- # set +x 00:10:47.531 04:02:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:47.531 04:02:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:47.531 04:02:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:47.531 04:02:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:47.531 04:02:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:47.531 04:02:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:47.531 04:02:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:47.531 04:02:01 -- nvmf/common.sh@295 -- # net_devs=() 00:10:47.531 04:02:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:47.531 04:02:01 -- nvmf/common.sh@296 -- # e810=() 00:10:47.531 04:02:01 -- nvmf/common.sh@296 -- # local -ga e810 00:10:47.531 04:02:01 -- nvmf/common.sh@297 -- # x722=() 00:10:47.531 04:02:01 -- nvmf/common.sh@297 -- # local -ga x722 00:10:47.531 04:02:01 -- nvmf/common.sh@298 -- # mlx=() 00:10:47.531 04:02:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:47.531 04:02:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:47.531 04:02:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:47.531 04:02:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:47.531 04:02:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:47.531 04:02:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:47.531 04:02:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:47.531 04:02:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:47.531 04:02:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:47.531 04:02:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:47.531 04:02:01 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:47.531 04:02:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:47.531 04:02:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:47.531 04:02:01 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:47.531 04:02:01 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:47.531 04:02:01 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:47.531 04:02:01 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:47.531 04:02:01 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:47.531 04:02:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:47.531 04:02:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:47.531 04:02:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:47.531 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:47.531 04:02:01 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:47.531 04:02:01 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:47.531 04:02:01 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:47.531 04:02:01 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:47.531 04:02:01 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:47.531 04:02:01 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:47.531 04:02:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:47.531 04:02:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:47.531 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:47.531 04:02:01 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:47.531 04:02:01 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:47.531 04:02:01 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:47.531 04:02:01 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:47.532 04:02:01 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:47.532 04:02:01 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:47.532 
04:02:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:47.532 04:02:01 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:47.532 04:02:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:47.532 04:02:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.532 04:02:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:47.532 04:02:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.532 04:02:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:47.532 Found net devices under 0000:18:00.0: mlx_0_0 00:10:47.532 04:02:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.532 04:02:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:47.532 04:02:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.532 04:02:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:47.532 04:02:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.532 04:02:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:47.532 Found net devices under 0000:18:00.1: mlx_0_1 00:10:47.532 04:02:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.532 04:02:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:47.532 04:02:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:47.532 04:02:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:47.532 04:02:01 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:10:47.532 04:02:01 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:10:47.532 04:02:01 -- nvmf/common.sh@409 -- # rdma_device_init 00:10:47.532 04:02:01 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:10:47.532 04:02:01 -- nvmf/common.sh@58 -- # uname 00:10:47.532 04:02:01 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:47.532 04:02:01 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:47.532 04:02:01 -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:47.532 04:02:01 -- 
nvmf/common.sh@64 -- # modprobe ib_umad 00:10:47.532 04:02:01 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:47.532 04:02:01 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:47.532 04:02:01 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:47.532 04:02:01 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:47.532 04:02:01 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:10:47.532 04:02:01 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:47.532 04:02:01 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:47.532 04:02:01 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:47.532 04:02:01 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:47.532 04:02:01 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:47.532 04:02:01 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:47.532 04:02:01 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:47.532 04:02:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:47.532 04:02:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:47.532 04:02:01 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:47.532 04:02:01 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:47.532 04:02:01 -- nvmf/common.sh@105 -- # continue 2 00:10:47.532 04:02:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:47.532 04:02:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:47.532 04:02:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:47.532 04:02:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:47.532 04:02:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:47.532 04:02:01 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:47.532 04:02:01 -- nvmf/common.sh@105 -- # continue 2 00:10:47.532 04:02:01 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:47.532 04:02:01 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 
00:10:47.532 04:02:01 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:47.532 04:02:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:47.532 04:02:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:47.532 04:02:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:47.532 04:02:01 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:47.532 04:02:01 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:47.532 04:02:01 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:47.532 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:47.532 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:10:47.532 altname enp24s0f0np0 00:10:47.532 altname ens785f0np0 00:10:47.532 inet 192.168.100.8/24 scope global mlx_0_0 00:10:47.532 valid_lft forever preferred_lft forever 00:10:47.532 04:02:01 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:47.532 04:02:01 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:47.532 04:02:01 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:47.532 04:02:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:47.532 04:02:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:47.532 04:02:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:47.532 04:02:01 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:47.532 04:02:01 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:47.532 04:02:01 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:47.532 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:47.532 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:10:47.532 altname enp24s0f1np1 00:10:47.532 altname ens785f1np1 00:10:47.532 inet 192.168.100.9/24 scope global mlx_0_1 00:10:47.532 valid_lft forever preferred_lft forever 00:10:47.532 04:02:01 -- nvmf/common.sh@411 -- # return 0 00:10:47.532 04:02:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:47.532 04:02:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:47.532 04:02:01 -- nvmf/common.sh@444 -- # [[ 
rdma == \r\d\m\a ]] 00:10:47.532 04:02:01 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:10:47.532 04:02:01 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:47.532 04:02:01 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:47.532 04:02:01 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:47.532 04:02:01 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:47.532 04:02:01 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:47.532 04:02:01 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:47.532 04:02:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:47.532 04:02:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:47.532 04:02:01 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:47.532 04:02:01 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:47.532 04:02:01 -- nvmf/common.sh@105 -- # continue 2 00:10:47.532 04:02:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:47.532 04:02:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:47.532 04:02:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:47.532 04:02:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:47.532 04:02:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:47.532 04:02:01 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:47.532 04:02:01 -- nvmf/common.sh@105 -- # continue 2 00:10:47.532 04:02:01 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:47.532 04:02:01 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:47.532 04:02:01 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:47.532 04:02:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:47.532 04:02:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:47.532 04:02:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:47.532 04:02:01 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 
00:10:47.532 04:02:01 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:47.532 04:02:01 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:47.532 04:02:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:47.532 04:02:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:47.532 04:02:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:47.532 04:02:01 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:10:47.532 192.168.100.9' 00:10:47.532 04:02:01 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:10:47.532 192.168.100.9' 00:10:47.532 04:02:01 -- nvmf/common.sh@446 -- # head -n 1 00:10:47.532 04:02:01 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:47.532 04:02:01 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:10:47.532 192.168.100.9' 00:10:47.532 04:02:01 -- nvmf/common.sh@447 -- # tail -n +2 00:10:47.532 04:02:01 -- nvmf/common.sh@447 -- # head -n 1 00:10:47.532 04:02:01 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:47.532 04:02:01 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:10:47.532 04:02:01 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:47.532 04:02:01 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:10:47.532 04:02:01 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:10:47.532 04:02:01 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:10:47.532 04:02:01 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:47.532 04:02:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:47.532 04:02:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:47.532 04:02:01 -- common/autotest_common.sh@10 -- # set +x 00:10:47.532 04:02:01 -- nvmf/common.sh@470 -- # nvmfpid=219023 00:10:47.532 04:02:01 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:47.532 04:02:01 -- nvmf/common.sh@471 -- # waitforlisten 219023 00:10:47.532 04:02:01 -- common/autotest_common.sh@817 -- # 
'[' -z 219023 ']' 00:10:47.532 04:02:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.532 04:02:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:47.532 04:02:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.532 04:02:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:47.532 04:02:01 -- common/autotest_common.sh@10 -- # set +x 00:10:47.532 [2024-04-19 04:02:02.013328] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:10:47.532 [2024-04-19 04:02:02.013368] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.532 EAL: No free 2048 kB hugepages reported on node 1 00:10:47.791 [2024-04-19 04:02:02.062385] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.791 [2024-04-19 04:02:02.133330] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:47.791 [2024-04-19 04:02:02.133365] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:47.791 [2024-04-19 04:02:02.133371] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:47.791 [2024-04-19 04:02:02.133377] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:47.791 [2024-04-19 04:02:02.133382] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:47.791 [2024-04-19 04:02:02.133396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.359 04:02:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:48.359 04:02:02 -- common/autotest_common.sh@850 -- # return 0 00:10:48.359 04:02:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:48.359 04:02:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:48.359 04:02:02 -- common/autotest_common.sh@10 -- # set +x 00:10:48.359 04:02:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.359 04:02:02 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:48.359 04:02:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:48.359 04:02:02 -- common/autotest_common.sh@10 -- # set +x 00:10:48.359 [2024-04-19 04:02:02.840279] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17fc830/0x1800d20) succeed. 00:10:48.359 [2024-04-19 04:02:02.848089] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17fdd30/0x18423b0) succeed. 
00:10:48.619 04:02:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:48.619 04:02:02 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:48.619 04:02:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:48.619 04:02:02 -- common/autotest_common.sh@10 -- # set +x 00:10:48.619 04:02:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:48.619 04:02:02 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:48.619 04:02:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:48.619 04:02:02 -- common/autotest_common.sh@10 -- # set +x 00:10:48.619 [2024-04-19 04:02:02.902211] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:48.620 04:02:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:48.620 04:02:02 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:48.620 04:02:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:48.620 04:02:02 -- common/autotest_common.sh@10 -- # set +x 00:10:48.620 NULL1 00:10:48.620 04:02:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:48.620 04:02:02 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:48.620 04:02:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:48.620 04:02:02 -- common/autotest_common.sh@10 -- # set +x 00:10:48.620 04:02:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:48.620 04:02:02 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:48.620 04:02:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:48.620 04:02:02 -- common/autotest_common.sh@10 -- # set +x 00:10:48.620 04:02:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:48.620 04:02:02 -- target/fused_ordering.sh@22 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:48.620 [2024-04-19 04:02:02.955272] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:10:48.620 [2024-04-19 04:02:02.955313] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid219104 ] 00:10:48.620 EAL: No free 2048 kB hugepages reported on node 1 00:10:48.620 Attached to nqn.2016-06.io.spdk:cnode1 00:10:48.620 Namespace ID: 1 size: 1GB 00:10:48.620 fused_ordering(0) 00:10:48.620 fused_ordering(1) 00:10:48.620 fused_ordering(2) 00:10:48.620 fused_ordering(3) 00:10:48.620 fused_ordering(4) 00:10:48.620 fused_ordering(5) 00:10:48.620 fused_ordering(6) 00:10:48.620 fused_ordering(7) 00:10:48.620 fused_ordering(8) 00:10:48.620 fused_ordering(9) 00:10:48.620 fused_ordering(10) 00:10:48.620 fused_ordering(11) 00:10:48.620 fused_ordering(12) 00:10:48.620 fused_ordering(13) 00:10:48.620 fused_ordering(14) 00:10:48.620 fused_ordering(15) 00:10:48.620 fused_ordering(16) 00:10:48.620 fused_ordering(17) 00:10:48.620 fused_ordering(18) 00:10:48.620 fused_ordering(19) 00:10:48.620 fused_ordering(20) 00:10:48.620 fused_ordering(21) 00:10:48.620 fused_ordering(22) 00:10:48.620 fused_ordering(23) 00:10:48.620 fused_ordering(24) 00:10:48.620 fused_ordering(25) 00:10:48.620 fused_ordering(26) 00:10:48.620 fused_ordering(27) 00:10:48.620 fused_ordering(28) 00:10:48.620 fused_ordering(29) 00:10:48.620 fused_ordering(30) 00:10:48.620 fused_ordering(31) 00:10:48.620 fused_ordering(32) 00:10:48.620 fused_ordering(33) 00:10:48.620 fused_ordering(34) 00:10:48.620 fused_ordering(35) 00:10:48.620 fused_ordering(36) 00:10:48.620 fused_ordering(37) 00:10:48.620 fused_ordering(38) 00:10:48.620 
fused_ordering(39) 00:10:48.620 fused_ordering(40) 00:10:48.620 fused_ordering(41) 00:10:48.620 fused_ordering(42) 00:10:48.620 fused_ordering(43) 00:10:48.620 fused_ordering(44) 00:10:48.620 fused_ordering(45) 00:10:48.620 fused_ordering(46) 00:10:48.620 fused_ordering(47) 00:10:48.620 fused_ordering(48) 00:10:48.620 fused_ordering(49) 00:10:48.620 fused_ordering(50) 00:10:48.620 fused_ordering(51) 00:10:48.620 fused_ordering(52) 00:10:48.620 fused_ordering(53) 00:10:48.620 fused_ordering(54) 00:10:48.620 fused_ordering(55) 00:10:48.620 fused_ordering(56) 00:10:48.620 fused_ordering(57) 00:10:48.620 fused_ordering(58) 00:10:48.620 fused_ordering(59) 00:10:48.620 fused_ordering(60) 00:10:48.620 fused_ordering(61) 00:10:48.620 fused_ordering(62) 00:10:48.620 fused_ordering(63) 00:10:48.620 fused_ordering(64) 00:10:48.620 fused_ordering(65) 00:10:48.620 fused_ordering(66) 00:10:48.620 fused_ordering(67) 00:10:48.620 fused_ordering(68) 00:10:48.620 fused_ordering(69) 00:10:48.620 fused_ordering(70) 00:10:48.620 fused_ordering(71) 00:10:48.620 fused_ordering(72) 00:10:48.620 fused_ordering(73) 00:10:48.620 fused_ordering(74) 00:10:48.620 fused_ordering(75) 00:10:48.620 fused_ordering(76) 00:10:48.620 fused_ordering(77) 00:10:48.620 fused_ordering(78) 00:10:48.620 fused_ordering(79) 00:10:48.620 fused_ordering(80) 00:10:48.620 fused_ordering(81) 00:10:48.620 fused_ordering(82) 00:10:48.620 fused_ordering(83) 00:10:48.620 fused_ordering(84) 00:10:48.620 fused_ordering(85) 00:10:48.620 fused_ordering(86) 00:10:48.620 fused_ordering(87) 00:10:48.620 fused_ordering(88) 00:10:48.620 fused_ordering(89) 00:10:48.620 fused_ordering(90) 00:10:48.620 fused_ordering(91) 00:10:48.620 fused_ordering(92) 00:10:48.620 fused_ordering(93) 00:10:48.620 fused_ordering(94) 00:10:48.620 fused_ordering(95) 00:10:48.620 fused_ordering(96) 00:10:48.620 fused_ordering(97) 00:10:48.620 fused_ordering(98) 00:10:48.620 fused_ordering(99) 00:10:48.620 fused_ordering(100) 00:10:48.620 
fused_ordering(101) 00:10:48.620 fused_ordering(102) 00:10:48.620 fused_ordering(103) 00:10:48.620 fused_ordering(104) 00:10:48.620 fused_ordering(105) 00:10:48.620 fused_ordering(106) 00:10:48.620 fused_ordering(107) 00:10:48.620 fused_ordering(108) 00:10:48.620 fused_ordering(109) 00:10:48.620 fused_ordering(110) 00:10:48.620 fused_ordering(111) 00:10:48.620 fused_ordering(112) 00:10:48.620 fused_ordering(113) 00:10:48.620 fused_ordering(114) 00:10:48.620 fused_ordering(115) 00:10:48.620 fused_ordering(116) 00:10:48.620 fused_ordering(117) 00:10:48.620 fused_ordering(118) 00:10:48.620 fused_ordering(119) 00:10:48.620 fused_ordering(120) 00:10:48.620 fused_ordering(121) 00:10:48.620 fused_ordering(122) 00:10:48.620 fused_ordering(123) 00:10:48.620 fused_ordering(124) 00:10:48.620 fused_ordering(125) 00:10:48.620 fused_ordering(126) 00:10:48.620 fused_ordering(127) 00:10:48.620 fused_ordering(128) 00:10:48.620 fused_ordering(129) 00:10:48.620 fused_ordering(130) 00:10:48.620 fused_ordering(131) 00:10:48.620 fused_ordering(132) 00:10:48.620 fused_ordering(133) 00:10:48.620 fused_ordering(134) 00:10:48.620 fused_ordering(135) 00:10:48.620 fused_ordering(136) 00:10:48.620 fused_ordering(137) 00:10:48.620 fused_ordering(138) 00:10:48.620 fused_ordering(139) 00:10:48.620 fused_ordering(140) 00:10:48.620 fused_ordering(141) 00:10:48.620 fused_ordering(142) 00:10:48.620 fused_ordering(143) 00:10:48.620 fused_ordering(144) 00:10:48.620 fused_ordering(145) 00:10:48.620 fused_ordering(146) 00:10:48.620 fused_ordering(147) 00:10:48.620 fused_ordering(148) 00:10:48.620 fused_ordering(149) 00:10:48.620 fused_ordering(150) 00:10:48.620 fused_ordering(151) 00:10:48.620 fused_ordering(152) 00:10:48.620 fused_ordering(153) 00:10:48.620 fused_ordering(154) 00:10:48.620 fused_ordering(155) 00:10:48.620 fused_ordering(156) 00:10:48.620 fused_ordering(157) 00:10:48.620 fused_ordering(158) 00:10:48.620 fused_ordering(159) 00:10:48.620 fused_ordering(160) 00:10:48.620 fused_ordering(161) 
00:10:48.620 fused_ordering(162) 00:10:48.620 fused_ordering(163) 00:10:48.620 fused_ordering(164) 00:10:48.620 fused_ordering(165) 00:10:48.620 fused_ordering(166) 00:10:48.620 fused_ordering(167) 00:10:48.620 fused_ordering(168) 00:10:48.620 fused_ordering(169) 00:10:48.620 fused_ordering(170) 00:10:48.620 fused_ordering(171) 00:10:48.620 fused_ordering(172) 00:10:48.620 fused_ordering(173) 00:10:48.620 fused_ordering(174) 00:10:48.620 fused_ordering(175) 00:10:48.620 fused_ordering(176) 00:10:48.620 fused_ordering(177) 00:10:48.620 fused_ordering(178) 00:10:48.620 fused_ordering(179) 00:10:48.620 fused_ordering(180) 00:10:48.620 fused_ordering(181) 00:10:48.620 fused_ordering(182) 00:10:48.620 fused_ordering(183) 00:10:48.620 fused_ordering(184) 00:10:48.620 fused_ordering(185) 00:10:48.620 fused_ordering(186) 00:10:48.620 fused_ordering(187) 00:10:48.620 fused_ordering(188) 00:10:48.620 fused_ordering(189) 00:10:48.620 fused_ordering(190) 00:10:48.620 fused_ordering(191) 00:10:48.620 fused_ordering(192) 00:10:48.620 fused_ordering(193) 00:10:48.620 fused_ordering(194) 00:10:48.620 fused_ordering(195) 00:10:48.620 fused_ordering(196) 00:10:48.620 fused_ordering(197) 00:10:48.620 fused_ordering(198) 00:10:48.620 fused_ordering(199) 00:10:48.620 fused_ordering(200) 00:10:48.620 fused_ordering(201) 00:10:48.620 fused_ordering(202) 00:10:48.620 fused_ordering(203) 00:10:48.620 fused_ordering(204) 00:10:48.620 fused_ordering(205) 00:10:48.881 fused_ordering(206) 00:10:48.882 fused_ordering(207) 00:10:48.882 fused_ordering(208) 00:10:48.882 fused_ordering(209) 00:10:48.882 fused_ordering(210) 00:10:48.882 fused_ordering(211) 00:10:48.882 fused_ordering(212) 00:10:48.882 fused_ordering(213) 00:10:48.882 fused_ordering(214) 00:10:48.882 fused_ordering(215) 00:10:48.882 fused_ordering(216) 00:10:48.882 fused_ordering(217) 00:10:48.882 fused_ordering(218) 00:10:48.882 fused_ordering(219) 00:10:48.882 fused_ordering(220) 00:10:48.882 fused_ordering(221) 00:10:48.882 
fused_ordering(222) 00:10:48.882 fused_ordering(223) 00:10:48.882 fused_ordering(224) 00:10:48.882 fused_ordering(225) 00:10:48.882 fused_ordering(226) 00:10:48.882 fused_ordering(227) 00:10:48.882 fused_ordering(228) 00:10:48.882 fused_ordering(229) 00:10:48.882 fused_ordering(230) 00:10:48.882 fused_ordering(231) 00:10:48.882 fused_ordering(232) 00:10:48.882 fused_ordering(233) 00:10:48.882 fused_ordering(234) 00:10:48.882 fused_ordering(235) 00:10:48.882 fused_ordering(236) 00:10:48.882 fused_ordering(237) 00:10:48.882 fused_ordering(238) 00:10:48.882 fused_ordering(239) 00:10:48.882 fused_ordering(240) 00:10:48.882 fused_ordering(241) 00:10:48.882 fused_ordering(242) 00:10:48.882 fused_ordering(243) 00:10:48.882 fused_ordering(244) 00:10:48.882 fused_ordering(245) 00:10:48.882 fused_ordering(246) 00:10:48.882 fused_ordering(247) 00:10:48.882 fused_ordering(248) 00:10:48.882 fused_ordering(249) 00:10:48.882 fused_ordering(250) 00:10:48.882 fused_ordering(251) 00:10:48.882 fused_ordering(252) 00:10:48.882 fused_ordering(253) 00:10:48.882 fused_ordering(254) 00:10:48.882 fused_ordering(255) 00:10:48.882 fused_ordering(256) 00:10:48.882 fused_ordering(257) 00:10:48.882 fused_ordering(258) 00:10:48.882 fused_ordering(259) 00:10:48.882 fused_ordering(260) 00:10:48.882 fused_ordering(261) 00:10:48.882 fused_ordering(262) 00:10:48.882 fused_ordering(263) 00:10:48.882 fused_ordering(264) 00:10:48.882 fused_ordering(265) 00:10:48.882 fused_ordering(266) 00:10:48.882 fused_ordering(267) 00:10:48.882 fused_ordering(268) 00:10:48.882 fused_ordering(269) 00:10:48.882 fused_ordering(270) 00:10:48.882 fused_ordering(271) 00:10:48.882 fused_ordering(272) 00:10:48.882 fused_ordering(273) 00:10:48.882 fused_ordering(274) 00:10:48.882 fused_ordering(275) 00:10:48.882 fused_ordering(276) 00:10:48.882 fused_ordering(277) 00:10:48.882 fused_ordering(278) 00:10:48.882 fused_ordering(279) 00:10:48.882 fused_ordering(280) 00:10:48.882 fused_ordering(281) 00:10:48.882 fused_ordering(282) 
00:10:48.882 fused_ordering(283) 00:10:48.882 fused_ordering(284) 00:10:48.882 fused_ordering(285) 00:10:48.882 fused_ordering(286) 00:10:48.882 fused_ordering(287) 00:10:48.882 fused_ordering(288) 00:10:48.882 fused_ordering(289) 00:10:48.882 fused_ordering(290) 00:10:48.882 fused_ordering(291) 00:10:48.882 fused_ordering(292) 00:10:48.882 fused_ordering(293) 00:10:48.882 fused_ordering(294) 00:10:48.882 fused_ordering(295) 00:10:48.882 fused_ordering(296) 00:10:48.882 fused_ordering(297) 00:10:48.882 fused_ordering(298) 00:10:48.882 fused_ordering(299) 00:10:48.882 fused_ordering(300) 00:10:48.882 fused_ordering(301) 00:10:48.882 fused_ordering(302) 00:10:48.882 fused_ordering(303) 00:10:48.882 fused_ordering(304) 00:10:48.882 fused_ordering(305) 00:10:48.882 fused_ordering(306) 00:10:48.882 fused_ordering(307) 00:10:48.882 fused_ordering(308) 00:10:48.882 fused_ordering(309) 00:10:48.882 fused_ordering(310) 00:10:48.882 fused_ordering(311) 00:10:48.882 fused_ordering(312) 00:10:48.882 fused_ordering(313) 00:10:48.882 fused_ordering(314) 00:10:48.882 fused_ordering(315) 00:10:48.882 fused_ordering(316) 00:10:48.882 fused_ordering(317) 00:10:48.882 fused_ordering(318) 00:10:48.882 fused_ordering(319) 00:10:48.882 fused_ordering(320) 00:10:48.882 fused_ordering(321) 00:10:48.882 fused_ordering(322) 00:10:48.882 fused_ordering(323) 00:10:48.882 fused_ordering(324) 00:10:48.882 fused_ordering(325) 00:10:48.882 fused_ordering(326) 00:10:48.882 fused_ordering(327) 00:10:48.882 fused_ordering(328) 00:10:48.882 fused_ordering(329) 00:10:48.882 fused_ordering(330) 00:10:48.882 fused_ordering(331) 00:10:48.882 fused_ordering(332) 00:10:48.882 fused_ordering(333) 00:10:48.882 fused_ordering(334) 00:10:48.882 fused_ordering(335) 00:10:48.882 fused_ordering(336) 00:10:48.882 fused_ordering(337) 00:10:48.882 fused_ordering(338) 00:10:48.882 fused_ordering(339) 00:10:48.882 fused_ordering(340) 00:10:48.882 fused_ordering(341) 00:10:48.882 fused_ordering(342) 00:10:48.882 
fused_ordering(343) 00:10:48.882 [... repetitive counter output: fused_ordering(344) through fused_ordering(1006), timestamps 00:10:48.882–00:10:49.144 ...] fused_ordering(1007) 00:10:49.144 
fused_ordering(1008) 00:10:49.144 fused_ordering(1009) 00:10:49.144 fused_ordering(1010) 00:10:49.144 fused_ordering(1011) 00:10:49.144 fused_ordering(1012) 00:10:49.144 fused_ordering(1013) 00:10:49.144 fused_ordering(1014) 00:10:49.144 fused_ordering(1015) 00:10:49.144 fused_ordering(1016) 00:10:49.144 fused_ordering(1017) 00:10:49.144 fused_ordering(1018) 00:10:49.144 fused_ordering(1019) 00:10:49.144 fused_ordering(1020) 00:10:49.144 fused_ordering(1021) 00:10:49.144 fused_ordering(1022) 00:10:49.144 fused_ordering(1023) 00:10:49.144 04:02:03 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:49.144 04:02:03 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:49.144 04:02:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:49.144 04:02:03 -- nvmf/common.sh@117 -- # sync 00:10:49.144 04:02:03 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:49.144 04:02:03 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:49.144 04:02:03 -- nvmf/common.sh@120 -- # set +e 00:10:49.144 04:02:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:49.144 04:02:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:49.144 rmmod nvme_rdma 00:10:49.144 rmmod nvme_fabrics 00:10:49.144 04:02:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:49.144 04:02:03 -- nvmf/common.sh@124 -- # set -e 00:10:49.144 04:02:03 -- nvmf/common.sh@125 -- # return 0 00:10:49.144 04:02:03 -- nvmf/common.sh@478 -- # '[' -n 219023 ']' 00:10:49.144 04:02:03 -- nvmf/common.sh@479 -- # killprocess 219023 00:10:49.144 04:02:03 -- common/autotest_common.sh@936 -- # '[' -z 219023 ']' 00:10:49.144 04:02:03 -- common/autotest_common.sh@940 -- # kill -0 219023 00:10:49.144 04:02:03 -- common/autotest_common.sh@941 -- # uname 00:10:49.144 04:02:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:49.144 04:02:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 219023 00:10:49.144 04:02:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 
00:10:49.144 04:02:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:49.144 04:02:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 219023' 00:10:49.144 killing process with pid 219023 00:10:49.144 04:02:03 -- common/autotest_common.sh@955 -- # kill 219023 00:10:49.144 04:02:03 -- common/autotest_common.sh@960 -- # wait 219023 00:10:49.404 04:02:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:49.404 04:02:03 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:10:49.404 00:10:49.404 real 0m7.366s 00:10:49.404 user 0m4.209s 00:10:49.404 sys 0m4.305s 00:10:49.404 04:02:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:49.404 04:02:03 -- common/autotest_common.sh@10 -- # set +x 00:10:49.404 ************************************ 00:10:49.404 END TEST nvmf_fused_ordering 00:10:49.404 ************************************ 00:10:49.404 04:02:03 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:10:49.404 04:02:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:49.404 04:02:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:49.404 04:02:03 -- common/autotest_common.sh@10 -- # set +x 00:10:49.663 ************************************ 00:10:49.663 START TEST nvmf_delete_subsystem 00:10:49.663 ************************************ 00:10:49.663 04:02:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:10:49.663 * Looking for test storage... 
00:10:49.663 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:49.663 04:02:04 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:49.663 04:02:04 -- nvmf/common.sh@7 -- # uname -s 00:10:49.663 04:02:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.663 04:02:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.663 04:02:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:49.663 04:02:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.663 04:02:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.663 04:02:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.663 04:02:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.663 04:02:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.663 04:02:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.663 04:02:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.663 04:02:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:10:49.663 04:02:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:10:49.663 04:02:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.663 04:02:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.663 04:02:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:49.663 04:02:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.663 04:02:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:49.663 04:02:04 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.663 04:02:04 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.663 04:02:04 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.663 04:02:04 -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.663 04:02:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.663 04:02:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.663 04:02:04 -- paths/export.sh@5 -- # export PATH 00:10:49.663 04:02:04 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.663 04:02:04 -- nvmf/common.sh@47 -- # : 0 00:10:49.663 04:02:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:49.663 04:02:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:49.663 04:02:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.663 04:02:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:49.663 04:02:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.663 04:02:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:49.663 04:02:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:49.663 04:02:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:49.663 04:02:04 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:49.663 04:02:04 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:10:49.663 04:02:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:49.663 04:02:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:49.663 04:02:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:49.663 04:02:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:49.663 04:02:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.663 04:02:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:49.663 04:02:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.663 04:02:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:49.663 04:02:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:49.663 04:02:04 
-- nvmf/common.sh@285 -- # xtrace_disable 00:10:49.663 04:02:04 -- common/autotest_common.sh@10 -- # set +x 00:10:54.938 04:02:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:54.938 04:02:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:54.938 04:02:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:54.938 04:02:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:54.938 04:02:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:54.938 04:02:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:54.938 04:02:08 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:54.938 04:02:08 -- nvmf/common.sh@295 -- # net_devs=() 00:10:54.938 04:02:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:54.938 04:02:08 -- nvmf/common.sh@296 -- # e810=() 00:10:54.939 04:02:08 -- nvmf/common.sh@296 -- # local -ga e810 00:10:54.939 04:02:08 -- nvmf/common.sh@297 -- # x722=() 00:10:54.939 04:02:08 -- nvmf/common.sh@297 -- # local -ga x722 00:10:54.939 04:02:08 -- nvmf/common.sh@298 -- # mlx=() 00:10:54.939 04:02:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:54.939 04:02:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:54.939 04:02:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:54.939 04:02:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:54.939 04:02:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:54.939 04:02:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:54.939 04:02:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:54.939 04:02:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:54.939 04:02:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:54.939 04:02:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:54.939 04:02:08 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:54.939 04:02:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:54.939 04:02:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:54.939 04:02:08 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:54.939 04:02:08 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:54.939 04:02:08 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:54.939 04:02:08 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:54.939 04:02:08 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:54.939 04:02:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:54.939 04:02:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:54.939 04:02:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:54.939 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:54.939 04:02:08 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:54.939 04:02:08 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:54.939 04:02:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:54.939 04:02:08 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:54.939 04:02:08 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:54.939 04:02:08 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:54.939 04:02:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:54.939 04:02:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:54.939 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:54.939 04:02:08 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:54.939 04:02:08 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:54.939 04:02:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:54.939 04:02:08 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:54.939 04:02:08 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:54.939 04:02:08 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:54.939 
04:02:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:54.939 04:02:08 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:54.939 04:02:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:54.939 04:02:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.939 04:02:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:54.939 04:02:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.939 04:02:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:54.939 Found net devices under 0000:18:00.0: mlx_0_0 00:10:54.939 04:02:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.939 04:02:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:54.939 04:02:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.939 04:02:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:54.939 04:02:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.939 04:02:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:54.939 Found net devices under 0000:18:00.1: mlx_0_1 00:10:54.939 04:02:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.939 04:02:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:54.939 04:02:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:54.939 04:02:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:54.939 04:02:08 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:10:54.939 04:02:08 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:10:54.939 04:02:08 -- nvmf/common.sh@409 -- # rdma_device_init 00:10:54.939 04:02:08 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:10:54.939 04:02:08 -- nvmf/common.sh@58 -- # uname 00:10:54.939 04:02:08 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:54.939 04:02:08 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:54.939 04:02:08 -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:54.939 04:02:08 -- 
nvmf/common.sh@64 -- # modprobe ib_umad 00:10:54.939 04:02:08 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:54.939 04:02:08 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:54.939 04:02:08 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:54.939 04:02:08 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:54.939 04:02:09 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:10:54.939 04:02:09 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:54.939 04:02:09 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:54.939 04:02:09 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:54.939 04:02:09 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:54.939 04:02:09 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:54.939 04:02:09 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:54.939 04:02:09 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:54.939 04:02:09 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:54.939 04:02:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:54.939 04:02:09 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:54.939 04:02:09 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:54.939 04:02:09 -- nvmf/common.sh@105 -- # continue 2 00:10:54.939 04:02:09 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:54.939 04:02:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:54.939 04:02:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:54.939 04:02:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:54.939 04:02:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:54.939 04:02:09 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:54.939 04:02:09 -- nvmf/common.sh@105 -- # continue 2 00:10:54.939 04:02:09 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:54.939 04:02:09 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 
00:10:54.939 04:02:09 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:54.939 04:02:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:54.939 04:02:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:54.939 04:02:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:54.939 04:02:09 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:54.939 04:02:09 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:54.939 04:02:09 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:54.939 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:54.939 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:10:54.939 altname enp24s0f0np0 00:10:54.939 altname ens785f0np0 00:10:54.939 inet 192.168.100.8/24 scope global mlx_0_0 00:10:54.939 valid_lft forever preferred_lft forever 00:10:54.939 04:02:09 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:54.939 04:02:09 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:54.939 04:02:09 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:54.939 04:02:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:54.939 04:02:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:54.939 04:02:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:54.939 04:02:09 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:54.939 04:02:09 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:54.939 04:02:09 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:54.939 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:54.939 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:10:54.939 altname enp24s0f1np1 00:10:54.939 altname ens785f1np1 00:10:54.939 inet 192.168.100.9/24 scope global mlx_0_1 00:10:54.939 valid_lft forever preferred_lft forever 00:10:54.939 04:02:09 -- nvmf/common.sh@411 -- # return 0 00:10:54.939 04:02:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:54.939 04:02:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:54.939 04:02:09 -- nvmf/common.sh@444 -- # [[ 
rdma == \r\d\m\a ]] 00:10:54.939 04:02:09 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:10:54.939 04:02:09 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:54.939 04:02:09 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:54.939 04:02:09 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:54.939 04:02:09 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:54.939 04:02:09 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:54.939 04:02:09 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:54.939 04:02:09 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:54.939 04:02:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:54.939 04:02:09 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:54.939 04:02:09 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:54.939 04:02:09 -- nvmf/common.sh@105 -- # continue 2 00:10:54.939 04:02:09 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:54.939 04:02:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:54.939 04:02:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:54.939 04:02:09 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:54.939 04:02:09 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:54.939 04:02:09 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:54.939 04:02:09 -- nvmf/common.sh@105 -- # continue 2 00:10:54.939 04:02:09 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:54.939 04:02:09 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:54.939 04:02:09 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:54.939 04:02:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:54.939 04:02:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:54.939 04:02:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:54.939 04:02:09 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 
00:10:54.939 04:02:09 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:54.939 04:02:09 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:54.939 04:02:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:54.939 04:02:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:54.939 04:02:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:54.939 04:02:09 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:10:54.939 192.168.100.9' 00:10:54.940 04:02:09 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:10:54.940 192.168.100.9' 00:10:54.940 04:02:09 -- nvmf/common.sh@446 -- # head -n 1 00:10:54.940 04:02:09 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:54.940 04:02:09 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:10:54.940 192.168.100.9' 00:10:54.940 04:02:09 -- nvmf/common.sh@447 -- # tail -n +2 00:10:54.940 04:02:09 -- nvmf/common.sh@447 -- # head -n 1 00:10:54.940 04:02:09 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:54.940 04:02:09 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:10:54.940 04:02:09 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:54.940 04:02:09 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:10:54.940 04:02:09 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:10:54.940 04:02:09 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:10:54.940 04:02:09 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:54.940 04:02:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:54.940 04:02:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:54.940 04:02:09 -- common/autotest_common.sh@10 -- # set +x 00:10:54.940 04:02:09 -- nvmf/common.sh@470 -- # nvmfpid=222506 00:10:54.940 04:02:09 -- nvmf/common.sh@471 -- # waitforlisten 222506 00:10:54.940 04:02:09 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:54.940 04:02:09 -- common/autotest_common.sh@817 -- 
# '[' -z 222506 ']' 00:10:54.940 04:02:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.940 04:02:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:54.940 04:02:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.940 04:02:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:54.940 04:02:09 -- common/autotest_common.sh@10 -- # set +x 00:10:54.940 [2024-04-19 04:02:09.208601] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:10:54.940 [2024-04-19 04:02:09.208649] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.940 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.940 [2024-04-19 04:02:09.260967] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:54.940 [2024-04-19 04:02:09.334958] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.940 [2024-04-19 04:02:09.334992] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:54.940 [2024-04-19 04:02:09.334999] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:54.940 [2024-04-19 04:02:09.335005] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:54.940 [2024-04-19 04:02:09.335009] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
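The address plumbing traced earlier (`ip -o -4 addr show | awk '{print $4}' | cut -d/ -f1` per interface, then `head`/`tail` over the combined list) reduces to a few lines of text processing. The helper name and the canned `ip` output below are illustrative stand-ins so the parsing can be shown without RDMA NICs:

```shell
# Extract the bare IPv4 address from `ip -o -4 addr show <ifc>` output;
# field 4 is the CIDR address, as in the awk/cut pipeline in the trace.
first_ipv4() {
  awk '{print $4}' | cut -d/ -f1 | head -n 1
}

# Canned one-line `ip -o -4` output standing in for the two RDMA interfaces.
ip0=$(echo '2: mlx_0_0    inet 192.168.100.8/24 brd 192.168.100.255 scope global mlx_0_0' | first_ipv4)
ip1=$(echo '3: mlx_0_1    inet 192.168.100.9/24 brd 192.168.100.255 scope global mlx_0_1' | first_ipv4)

# First/second target IPs come from head/tail over the newline-joined list,
# matching the RDMA_IP_LIST handling above.
RDMA_IP_LIST=$(printf '%s\n%s\n' "$ip0" "$ip1")
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
```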
00:10:54.940 [2024-04-19 04:02:09.335045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.940 [2024-04-19 04:02:09.335047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.509 04:02:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:55.509 04:02:09 -- common/autotest_common.sh@850 -- # return 0 00:10:55.509 04:02:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:55.509 04:02:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:55.509 04:02:09 -- common/autotest_common.sh@10 -- # set +x 00:10:55.509 04:02:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.509 04:02:10 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:55.509 04:02:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.509 04:02:10 -- common/autotest_common.sh@10 -- # set +x 00:10:55.769 [2024-04-19 04:02:10.046563] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2187060/0x218b550) succeed. 00:10:55.769 [2024-04-19 04:02:10.054656] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2188560/0x21ccbe0) succeed. 
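The setup that delete_subsystem.sh drives through `rpc_cmd` in this test boils down to the RPC sequence sketched below. `rpc` here is a deliberate stub that only echoes each call, so the order can be shown without a running nvmf_tgt; against a real target these would go through `scripts/rpc.py` over /var/tmp/spdk.sock:

```shell
# Stub: echo each RPC instead of talking to a live target over the UNIX socket.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
rpc bdev_null_create NULL1 1000 512
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

The delay bdev on top of the null bdev is what keeps I/O in flight long enough for the later `nvmf_delete_subsystem` to race against active qpairs.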
00:10:55.769 04:02:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.769 04:02:10 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:55.769 04:02:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.769 04:02:10 -- common/autotest_common.sh@10 -- # set +x 00:10:55.769 04:02:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.769 04:02:10 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:55.769 04:02:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.769 04:02:10 -- common/autotest_common.sh@10 -- # set +x 00:10:55.769 [2024-04-19 04:02:10.137397] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:55.769 04:02:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.769 04:02:10 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:55.769 04:02:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.769 04:02:10 -- common/autotest_common.sh@10 -- # set +x 00:10:55.769 NULL1 00:10:55.769 04:02:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.769 04:02:10 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:55.769 04:02:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.769 04:02:10 -- common/autotest_common.sh@10 -- # set +x 00:10:55.769 Delay0 00:10:55.769 04:02:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.769 04:02:10 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.769 04:02:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.769 04:02:10 -- common/autotest_common.sh@10 -- # set +x 00:10:55.769 04:02:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:10:55.769 04:02:10 -- target/delete_subsystem.sh@28 -- # perf_pid=222639 00:10:55.769 04:02:10 -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:55.769 04:02:10 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:55.769 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.769 [2024-04-19 04:02:10.233622] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:10:57.677 04:02:12 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:57.677 04:02:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:57.677 04:02:12 -- common/autotest_common.sh@10 -- # set +x 00:10:59.057 NVMe io qpair process completion error 00:10:59.057 NVMe io qpair process completion error 00:10:59.057 NVMe io qpair process completion error 00:10:59.057 NVMe io qpair process completion error 00:10:59.057 NVMe io qpair process completion error 00:10:59.057 NVMe io qpair process completion error 00:10:59.057 04:02:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:59.057 04:02:13 -- target/delete_subsystem.sh@34 -- # delay=0 00:10:59.057 04:02:13 -- target/delete_subsystem.sh@35 -- # kill -0 222639 00:10:59.057 04:02:13 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:59.316 04:02:13 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:59.316 04:02:13 -- target/delete_subsystem.sh@35 -- # kill -0 222639 00:10:59.316 04:02:13 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:59.885 Write completed with error (sct=0, sc=8) 00:10:59.885 starting I/O failed: -6 00:10:59.885 Read completed with error (sct=0, sc=8) 
00:10:59.885 [hundreds of further interleaved 'starting I/O failed: -6' and 'Read/Write completed with error (sct=0, sc=8)' qpair completion lines elided] 00:10:59.887 [2024-04-19 04:02:14.306206] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:10:59.887 04:02:14 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:59.887 04:02:14 -- target/delete_subsystem.sh@35 -- # kill -0 222639 00:10:59.887 04:02:14 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:59.887 [2024-04-19 04:02:14.319948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:10:59.887 [2024-04-19 04:02:14.319965] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:10:59.887 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:59.887 Initializing NVMe Controllers 00:10:59.887 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:10:59.887 Controller IO queue size 128, less than required. 00:10:59.887 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:59.887 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:59.887 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:59.887 Initialization complete. Launching workers. 00:10:59.887 ======================================================== 00:10:59.887 Latency(us) 00:10:59.887 Device Information : IOPS MiB/s Average min max 00:10:59.887 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.48 0.04 1593684.71 1000070.00 2976752.42 00:10:59.887 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.48 0.04 1595188.71 1000281.53 2978105.99 00:10:59.887 ======================================================== 00:10:59.887 Total : 160.97 0.08 1594436.71 1000070.00 2978105.99 00:10:59.887 00:11:00.457 04:02:14 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:00.457 04:02:14 -- target/delete_subsystem.sh@35 -- # kill -0 222639 00:11:00.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (222639) - No such process 00:11:00.457 04:02:14 -- target/delete_subsystem.sh@45 -- # NOT wait 222639 00:11:00.457 04:02:14 -- common/autotest_common.sh@638 -- # local es=0 00:11:00.457 04:02:14 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 222639 00:11:00.457 04:02:14 -- common/autotest_common.sh@626 -- # local arg=wait 00:11:00.457 04:02:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:00.457 04:02:14 -- 
common/autotest_common.sh@630 -- # type -t wait 00:11:00.457 04:02:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:00.457 04:02:14 -- common/autotest_common.sh@641 -- # wait 222639 00:11:00.457 04:02:14 -- common/autotest_common.sh@641 -- # es=1 00:11:00.457 04:02:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:00.457 04:02:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:00.457 04:02:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:00.457 04:02:14 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:00.457 04:02:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:00.457 04:02:14 -- common/autotest_common.sh@10 -- # set +x 00:11:00.457 04:02:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:00.457 04:02:14 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:00.457 04:02:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:00.457 04:02:14 -- common/autotest_common.sh@10 -- # set +x 00:11:00.457 [2024-04-19 04:02:14.828952] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:00.457 04:02:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:00.457 04:02:14 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:00.457 04:02:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:00.457 04:02:14 -- common/autotest_common.sh@10 -- # set +x 00:11:00.457 04:02:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:00.457 04:02:14 -- target/delete_subsystem.sh@54 -- # perf_pid=223511 00:11:00.457 04:02:14 -- target/delete_subsystem.sh@56 -- # delay=0 00:11:00.457 04:02:14 -- target/delete_subsystem.sh@57 -- # kill -0 223511 00:11:00.457 04:02:14 -- target/delete_subsystem.sh@58 -- # sleep 0.5 
00:11:00.457 04:02:14 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:00.457 EAL: No free 2048 kB hugepages reported on node 1 00:11:00.457 [2024-04-19 04:02:14.907092] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:01.027 04:02:15 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:01.027 04:02:15 -- target/delete_subsystem.sh@57 -- # kill -0 223511 00:11:01.027 04:02:15 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:01.597 04:02:15 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:01.597 04:02:15 -- target/delete_subsystem.sh@57 -- # kill -0 223511 00:11:01.597 04:02:15 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:01.857 04:02:16 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:01.857 04:02:16 -- target/delete_subsystem.sh@57 -- # kill -0 223511 00:11:01.857 04:02:16 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:02.427 04:02:16 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:02.427 04:02:16 -- target/delete_subsystem.sh@57 -- # kill -0 223511 00:11:02.427 04:02:16 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:02.996 04:02:17 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:02.996 04:02:17 -- target/delete_subsystem.sh@57 -- # kill -0 223511 00:11:02.996 04:02:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:03.566 04:02:17 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:03.566 04:02:17 -- target/delete_subsystem.sh@57 -- # kill -0 223511 00:11:03.566 04:02:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:04.134 04:02:18 -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:04.134 04:02:18 -- target/delete_subsystem.sh@57 -- # kill -0 223511 00:11:04.134 04:02:18 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:04.394 04:02:18 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:04.394 04:02:18 -- target/delete_subsystem.sh@57 -- # kill -0 223511 00:11:04.394 04:02:18 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:04.963 04:02:19 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:04.963 04:02:19 -- target/delete_subsystem.sh@57 -- # kill -0 223511 00:11:04.963 04:02:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:05.533 04:02:19 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:05.533 04:02:19 -- target/delete_subsystem.sh@57 -- # kill -0 223511 00:11:05.533 04:02:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:06.102 04:02:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:06.102 04:02:20 -- target/delete_subsystem.sh@57 -- # kill -0 223511 00:11:06.102 04:02:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:06.362 04:02:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:06.362 04:02:20 -- target/delete_subsystem.sh@57 -- # kill -0 223511 00:11:06.362 04:02:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:06.931 04:02:21 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:06.931 04:02:21 -- target/delete_subsystem.sh@57 -- # kill -0 223511 00:11:06.931 04:02:21 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:07.500 04:02:21 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:07.500 04:02:21 -- target/delete_subsystem.sh@57 -- # kill -0 223511 00:11:07.500 04:02:21 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:07.760 Initializing NVMe Controllers 00:11:07.760 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:11:07.760 Controller IO queue size 128, less than required. 
00:11:07.760 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:07.760 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:07.760 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:07.760 Initialization complete. Launching workers. 00:11:07.760 ======================================================== 00:11:07.760 Latency(us) 00:11:07.760 Device Information : IOPS MiB/s Average min max 00:11:07.760 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001262.49 1000056.23 1003951.01 00:11:07.760 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002621.06 1000262.56 1005536.89 00:11:07.760 ======================================================== 00:11:07.760 Total : 256.00 0.12 1001941.77 1000056.23 1005536.89 00:11:07.760 00:11:08.020 04:02:22 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:08.020 04:02:22 -- target/delete_subsystem.sh@57 -- # kill -0 223511 00:11:08.020 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (223511) - No such process 00:11:08.020 04:02:22 -- target/delete_subsystem.sh@67 -- # wait 223511 00:11:08.020 04:02:22 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:08.020 04:02:22 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:08.020 04:02:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:08.020 04:02:22 -- nvmf/common.sh@117 -- # sync 00:11:08.020 04:02:22 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:08.020 04:02:22 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:08.020 04:02:22 -- nvmf/common.sh@120 -- # set +e 00:11:08.020 04:02:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:08.020 04:02:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:08.020 rmmod nvme_rdma 00:11:08.020 rmmod 
nvme_fabrics 00:11:08.020 04:02:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:08.020 04:02:22 -- nvmf/common.sh@124 -- # set -e 00:11:08.020 04:02:22 -- nvmf/common.sh@125 -- # return 0 00:11:08.020 04:02:22 -- nvmf/common.sh@478 -- # '[' -n 222506 ']' 00:11:08.020 04:02:22 -- nvmf/common.sh@479 -- # killprocess 222506 00:11:08.020 04:02:22 -- common/autotest_common.sh@936 -- # '[' -z 222506 ']' 00:11:08.020 04:02:22 -- common/autotest_common.sh@940 -- # kill -0 222506 00:11:08.020 04:02:22 -- common/autotest_common.sh@941 -- # uname 00:11:08.020 04:02:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:08.020 04:02:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 222506 00:11:08.020 04:02:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:08.020 04:02:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:08.020 04:02:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 222506' 00:11:08.020 killing process with pid 222506 00:11:08.020 04:02:22 -- common/autotest_common.sh@955 -- # kill 222506 00:11:08.020 04:02:22 -- common/autotest_common.sh@960 -- # wait 222506 00:11:08.281 04:02:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:08.281 04:02:22 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:11:08.281 00:11:08.281 real 0m18.662s 00:11:08.281 user 0m49.477s 00:11:08.281 sys 0m4.718s 00:11:08.281 04:02:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:08.281 04:02:22 -- common/autotest_common.sh@10 -- # set +x 00:11:08.281 ************************************ 00:11:08.281 END TEST nvmf_delete_subsystem 00:11:08.281 ************************************ 00:11:08.281 04:02:22 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:11:08.281 04:02:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:08.281 04:02:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:08.281 04:02:22 
-- common/autotest_common.sh@10 -- # set +x 00:11:08.541 ************************************ 00:11:08.541 START TEST nvmf_ns_masking 00:11:08.541 ************************************ 00:11:08.541 04:02:22 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:11:08.541 * Looking for test storage... 00:11:08.541 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:08.541 04:02:22 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:08.541 04:02:22 -- nvmf/common.sh@7 -- # uname -s 00:11:08.541 04:02:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.541 04:02:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.541 04:02:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.541 04:02:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.541 04:02:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.541 04:02:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.541 04:02:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.541 04:02:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.541 04:02:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.541 04:02:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.541 04:02:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:11:08.541 04:02:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:11:08.541 04:02:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.541 04:02:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.541 04:02:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:08.541 04:02:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.541 04:02:23 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:08.541 04:02:23 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.541 04:02:23 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.541 04:02:23 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.541 04:02:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.541 04:02:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.541 04:02:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.541 04:02:23 -- paths/export.sh@5 -- # export PATH 00:11:08.541 04:02:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.541 04:02:23 -- nvmf/common.sh@47 -- # : 0 00:11:08.541 04:02:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:08.541 04:02:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:08.541 04:02:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.541 04:02:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.541 04:02:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.541 04:02:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:08.541 04:02:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:08.541 04:02:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:08.541 04:02:23 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:08.541 04:02:23 -- target/ns_masking.sh@11 -- # loops=5 
00:11:08.541 04:02:23 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:08.541 04:02:23 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:11:08.541 04:02:23 -- target/ns_masking.sh@15 -- # uuidgen 00:11:08.541 04:02:23 -- target/ns_masking.sh@15 -- # HOSTID=11dfd9b9-138f-43d7-9ca7-f8958c0ff752 00:11:08.541 04:02:23 -- target/ns_masking.sh@44 -- # nvmftestinit 00:11:08.541 04:02:23 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:11:08.541 04:02:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.541 04:02:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:08.541 04:02:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:08.541 04:02:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:08.541 04:02:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.541 04:02:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:08.541 04:02:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.541 04:02:23 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:08.541 04:02:23 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:08.541 04:02:23 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:08.541 04:02:23 -- common/autotest_common.sh@10 -- # set +x 00:11:13.826 04:02:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:13.826 04:02:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:13.826 04:02:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:13.826 04:02:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:13.826 04:02:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:13.826 04:02:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:13.826 04:02:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:13.826 04:02:27 -- nvmf/common.sh@295 -- # net_devs=() 00:11:13.826 04:02:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:13.826 04:02:27 -- nvmf/common.sh@296 -- # e810=() 00:11:13.826 04:02:27 -- 
nvmf/common.sh@296 -- # local -ga e810 00:11:13.826 04:02:27 -- nvmf/common.sh@297 -- # x722=() 00:11:13.826 04:02:27 -- nvmf/common.sh@297 -- # local -ga x722 00:11:13.826 04:02:27 -- nvmf/common.sh@298 -- # mlx=() 00:11:13.826 04:02:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:13.826 04:02:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:13.826 04:02:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:13.826 04:02:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:13.826 04:02:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:13.826 04:02:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:13.826 04:02:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:13.826 04:02:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:13.826 04:02:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:13.826 04:02:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:13.826 04:02:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:13.826 04:02:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:13.826 04:02:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:13.826 04:02:27 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:13.826 04:02:27 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:13.826 04:02:27 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:13.826 04:02:27 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:13.826 04:02:27 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:13.826 04:02:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:13.826 04:02:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:13.826 04:02:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:13.826 Found 0000:18:00.0 (0x15b3 - 
0x1015) 00:11:13.826 04:02:27 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:13.826 04:02:27 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:13.826 04:02:27 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:13.826 04:02:27 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:13.826 04:02:27 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:13.826 04:02:27 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:13.826 04:02:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:13.826 04:02:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:13.826 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:13.826 04:02:27 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:13.826 04:02:27 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:13.826 04:02:27 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:13.826 04:02:27 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:13.826 04:02:27 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:13.826 04:02:27 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:13.826 04:02:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:13.826 04:02:27 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:13.826 04:02:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:13.826 04:02:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.826 04:02:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:13.826 04:02:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.826 04:02:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:13.826 Found net devices under 0000:18:00.0: mlx_0_0 00:11:13.826 04:02:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.826 04:02:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:13.826 04:02:27 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.826 04:02:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:13.826 04:02:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.826 04:02:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:13.826 Found net devices under 0000:18:00.1: mlx_0_1 00:11:13.826 04:02:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.826 04:02:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:13.826 04:02:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:13.826 04:02:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:13.826 04:02:27 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:11:13.826 04:02:27 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:11:13.826 04:02:27 -- nvmf/common.sh@409 -- # rdma_device_init 00:11:13.826 04:02:27 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:11:13.826 04:02:27 -- nvmf/common.sh@58 -- # uname 00:11:13.826 04:02:27 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:13.826 04:02:27 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:13.826 04:02:27 -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:13.826 04:02:27 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:13.826 04:02:27 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:13.826 04:02:27 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:13.826 04:02:27 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:13.826 04:02:27 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:13.826 04:02:27 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:11:13.826 04:02:27 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:13.826 04:02:27 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:13.826 04:02:27 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:13.826 04:02:27 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:13.826 04:02:27 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:13.826 04:02:27 -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:13.826 04:02:27 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:13.826 04:02:27 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:13.826 04:02:27 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.826 04:02:27 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:13.826 04:02:27 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:13.826 04:02:27 -- nvmf/common.sh@105 -- # continue 2 00:11:13.826 04:02:27 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:13.826 04:02:27 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.826 04:02:27 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:13.826 04:02:27 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.826 04:02:27 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:13.826 04:02:27 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:13.826 04:02:27 -- nvmf/common.sh@105 -- # continue 2 00:11:13.826 04:02:27 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:13.826 04:02:27 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:13.826 04:02:27 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:13.826 04:02:27 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:13.826 04:02:27 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:13.826 04:02:27 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:13.826 04:02:27 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:13.826 04:02:27 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:13.826 04:02:27 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:13.826 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:13.827 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:11:13.827 altname enp24s0f0np0 00:11:13.827 altname ens785f0np0 00:11:13.827 inet 192.168.100.8/24 scope global mlx_0_0 00:11:13.827 valid_lft forever preferred_lft 
forever 00:11:13.827 04:02:27 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:13.827 04:02:27 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:13.827 04:02:27 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:13.827 04:02:27 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:13.827 04:02:27 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:13.827 04:02:27 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:13.827 04:02:27 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:13.827 04:02:27 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:13.827 04:02:27 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:13.827 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:13.827 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:11:13.827 altname enp24s0f1np1 00:11:13.827 altname ens785f1np1 00:11:13.827 inet 192.168.100.9/24 scope global mlx_0_1 00:11:13.827 valid_lft forever preferred_lft forever 00:11:13.827 04:02:27 -- nvmf/common.sh@411 -- # return 0 00:11:13.827 04:02:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:13.827 04:02:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:13.827 04:02:27 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:11:13.827 04:02:27 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:11:13.827 04:02:27 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:13.827 04:02:27 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:13.827 04:02:27 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:13.827 04:02:27 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:13.827 04:02:27 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:13.827 04:02:28 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:13.827 04:02:28 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:13.827 04:02:28 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.827 04:02:28 -- nvmf/common.sh@103 -- # 
[[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:13.827 04:02:28 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:13.827 04:02:28 -- nvmf/common.sh@105 -- # continue 2 00:11:13.827 04:02:28 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:13.827 04:02:28 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.827 04:02:28 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:13.827 04:02:28 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.827 04:02:28 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:13.827 04:02:28 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:13.827 04:02:28 -- nvmf/common.sh@105 -- # continue 2 00:11:13.827 04:02:28 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:13.827 04:02:28 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:13.827 04:02:28 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:13.827 04:02:28 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:13.827 04:02:28 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:13.827 04:02:28 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:13.827 04:02:28 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:13.827 04:02:28 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:13.827 04:02:28 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:13.827 04:02:28 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:13.827 04:02:28 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:13.827 04:02:28 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:13.827 04:02:28 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:11:13.827 192.168.100.9' 00:11:13.827 04:02:28 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:11:13.827 192.168.100.9' 00:11:13.827 04:02:28 -- nvmf/common.sh@446 -- # head -n 1 00:11:13.827 04:02:28 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:13.827 04:02:28 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:11:13.827 192.168.100.9' 
00:11:13.827 04:02:28 -- nvmf/common.sh@447 -- # tail -n +2 00:11:13.827 04:02:28 -- nvmf/common.sh@447 -- # head -n 1 00:11:13.827 04:02:28 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:13.827 04:02:28 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:11:13.827 04:02:28 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:13.827 04:02:28 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:11:13.827 04:02:28 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:11:13.827 04:02:28 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:11:13.827 04:02:28 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:11:13.827 04:02:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:13.827 04:02:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:13.827 04:02:28 -- common/autotest_common.sh@10 -- # set +x 00:11:13.827 04:02:28 -- nvmf/common.sh@470 -- # nvmfpid=228205 00:11:13.827 04:02:28 -- nvmf/common.sh@471 -- # waitforlisten 228205 00:11:13.827 04:02:28 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:13.827 04:02:28 -- common/autotest_common.sh@817 -- # '[' -z 228205 ']' 00:11:13.827 04:02:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.827 04:02:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:13.827 04:02:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.827 04:02:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:13.827 04:02:28 -- common/autotest_common.sh@10 -- # set +x 00:11:13.827 [2024-04-19 04:02:28.123921] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:11:13.827 [2024-04-19 04:02:28.123959] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.827 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.827 [2024-04-19 04:02:28.172908] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:13.827 [2024-04-19 04:02:28.245713] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:13.827 [2024-04-19 04:02:28.245748] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:13.827 [2024-04-19 04:02:28.245754] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:13.827 [2024-04-19 04:02:28.245760] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:13.827 [2024-04-19 04:02:28.245765] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:13.827 [2024-04-19 04:02:28.245801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.827 [2024-04-19 04:02:28.245882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:13.827 [2024-04-19 04:02:28.245967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:13.827 [2024-04-19 04:02:28.245968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.396 04:02:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:14.396 04:02:28 -- common/autotest_common.sh@850 -- # return 0 00:11:14.396 04:02:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:14.396 04:02:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:14.396 04:02:28 -- common/autotest_common.sh@10 -- # set +x 00:11:14.683 04:02:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.683 04:02:28 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:14.683 [2024-04-19 04:02:29.100482] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19456c0/0x1949bb0) succeed. 00:11:14.683 [2024-04-19 04:02:29.109780] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1946cb0/0x198b240) succeed. 
00:11:14.986 04:02:29 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:11:14.986 04:02:29 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:11:14.986 04:02:29 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:14.986 Malloc1 00:11:14.986 04:02:29 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:15.250 Malloc2 00:11:15.250 04:02:29 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:15.250 04:02:29 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:15.513 04:02:29 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:15.779 [2024-04-19 04:02:30.079052] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:15.779 04:02:30 -- target/ns_masking.sh@61 -- # connect 00:11:15.779 04:02:30 -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 11dfd9b9-138f-43d7-9ca7-f8958c0ff752 -a 192.168.100.8 -s 4420 -i 4 00:11:16.048 04:02:30 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:11:16.048 04:02:30 -- common/autotest_common.sh@1184 -- # local i=0 00:11:16.048 04:02:30 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:16.048 04:02:30 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:16.048 04:02:30 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:18.044 04:02:32 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:18.044 04:02:32 -- 
common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:18.044 04:02:32 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:18.044 04:02:32 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:18.044 04:02:32 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:18.044 04:02:32 -- common/autotest_common.sh@1194 -- # return 0 00:11:18.044 04:02:32 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:18.044 04:02:32 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:18.044 04:02:32 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:18.044 04:02:32 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:18.044 04:02:32 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:11:18.044 04:02:32 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:18.044 04:02:32 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:18.044 [ 0]:0x1 00:11:18.044 04:02:32 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:18.044 04:02:32 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:18.044 04:02:32 -- target/ns_masking.sh@40 -- # nguid=67b21ac2d48b41459367f3d389ebdb66 00:11:18.044 04:02:32 -- target/ns_masking.sh@41 -- # [[ 67b21ac2d48b41459367f3d389ebdb66 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:18.045 04:02:32 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:18.341 04:02:32 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:11:18.341 04:02:32 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:18.341 04:02:32 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:18.341 [ 0]:0x1 00:11:18.341 04:02:32 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:18.341 04:02:32 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:18.341 04:02:32 -- 
target/ns_masking.sh@40 -- # nguid=67b21ac2d48b41459367f3d389ebdb66 00:11:18.341 04:02:32 -- target/ns_masking.sh@41 -- # [[ 67b21ac2d48b41459367f3d389ebdb66 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:18.341 04:02:32 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:11:18.341 04:02:32 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:18.341 04:02:32 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:18.341 [ 1]:0x2 00:11:18.341 04:02:32 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:18.341 04:02:32 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:18.341 04:02:32 -- target/ns_masking.sh@40 -- # nguid=fd541d96a476497db19ac60c864f807a 00:11:18.341 04:02:32 -- target/ns_masking.sh@41 -- # [[ fd541d96a476497db19ac60c864f807a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:18.341 04:02:32 -- target/ns_masking.sh@69 -- # disconnect 00:11:18.341 04:02:32 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:18.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.920 04:02:33 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.920 04:02:33 -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:19.183 04:02:33 -- target/ns_masking.sh@77 -- # connect 1 00:11:19.183 04:02:33 -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 11dfd9b9-138f-43d7-9ca7-f8958c0ff752 -a 192.168.100.8 -s 4420 -i 4 00:11:19.452 04:02:33 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:19.452 04:02:33 -- common/autotest_common.sh@1184 -- # local i=0 00:11:19.452 04:02:33 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 
nvme_devices=0 00:11:19.452 04:02:33 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:11:19.452 04:02:33 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:11:19.452 04:02:33 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:21.410 04:02:35 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:21.410 04:02:35 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:21.410 04:02:35 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:21.410 04:02:35 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:21.410 04:02:35 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:21.410 04:02:35 -- common/autotest_common.sh@1194 -- # return 0 00:11:21.410 04:02:35 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:21.410 04:02:35 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:21.410 04:02:35 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:21.410 04:02:35 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:21.410 04:02:35 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:11:21.410 04:02:35 -- common/autotest_common.sh@638 -- # local es=0 00:11:21.410 04:02:35 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:21.410 04:02:35 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:21.410 04:02:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:21.410 04:02:35 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:21.410 04:02:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:21.410 04:02:35 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:21.410 04:02:35 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:21.410 04:02:35 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:21.410 04:02:35 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 
00:11:21.410 04:02:35 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:21.410 04:02:35 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:21.410 04:02:35 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:21.410 04:02:35 -- common/autotest_common.sh@641 -- # es=1 00:11:21.410 04:02:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:21.410 04:02:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:21.410 04:02:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:21.410 04:02:35 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:11:21.410 04:02:35 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:21.410 04:02:35 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:21.410 [ 0]:0x2 00:11:21.410 04:02:35 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:21.410 04:02:35 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:21.679 04:02:35 -- target/ns_masking.sh@40 -- # nguid=fd541d96a476497db19ac60c864f807a 00:11:21.679 04:02:35 -- target/ns_masking.sh@41 -- # [[ fd541d96a476497db19ac60c864f807a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:21.680 04:02:35 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:21.680 04:02:36 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:11:21.680 04:02:36 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:21.680 04:02:36 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:21.680 [ 0]:0x1 00:11:21.680 04:02:36 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:21.680 04:02:36 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:21.680 04:02:36 -- target/ns_masking.sh@40 -- # nguid=67b21ac2d48b41459367f3d389ebdb66 00:11:21.680 04:02:36 -- target/ns_masking.sh@41 -- # [[ 
67b21ac2d48b41459367f3d389ebdb66 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:21.680 04:02:36 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:11:21.680 04:02:36 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:21.680 04:02:36 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:21.680 [ 1]:0x2 00:11:21.680 04:02:36 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:21.680 04:02:36 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:21.952 04:02:36 -- target/ns_masking.sh@40 -- # nguid=fd541d96a476497db19ac60c864f807a 00:11:21.952 04:02:36 -- target/ns_masking.sh@41 -- # [[ fd541d96a476497db19ac60c864f807a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:21.952 04:02:36 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:21.952 04:02:36 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:11:21.952 04:02:36 -- common/autotest_common.sh@638 -- # local es=0 00:11:21.952 04:02:36 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:21.952 04:02:36 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:21.952 04:02:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:21.952 04:02:36 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:21.952 04:02:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:21.952 04:02:36 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:21.952 04:02:36 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:21.952 04:02:36 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:21.952 04:02:36 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:21.952 04:02:36 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:21.952 04:02:36 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 
00:11:21.952 04:02:36 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:21.952 04:02:36 -- common/autotest_common.sh@641 -- # es=1 00:11:21.952 04:02:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:21.952 04:02:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:21.952 04:02:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:21.952 04:02:36 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:11:21.952 04:02:36 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:21.953 04:02:36 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:21.953 [ 0]:0x2 00:11:21.953 04:02:36 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:21.953 04:02:36 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:21.953 04:02:36 -- target/ns_masking.sh@40 -- # nguid=fd541d96a476497db19ac60c864f807a 00:11:21.953 04:02:36 -- target/ns_masking.sh@41 -- # [[ fd541d96a476497db19ac60c864f807a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:21.953 04:02:36 -- target/ns_masking.sh@91 -- # disconnect 00:11:21.953 04:02:36 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:22.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.539 04:02:36 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:22.539 04:02:36 -- target/ns_masking.sh@95 -- # connect 2 00:11:22.539 04:02:36 -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 11dfd9b9-138f-43d7-9ca7-f8958c0ff752 -a 192.168.100.8 -s 4420 -i 4 00:11:22.807 04:02:37 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:22.807 04:02:37 -- common/autotest_common.sh@1184 -- # local i=0 00:11:22.807 04:02:37 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:22.807 04:02:37 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:11:22.807 04:02:37 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:11:22.807 04:02:37 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:25.381 04:02:39 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:25.381 04:02:39 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:25.381 04:02:39 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:25.382 04:02:39 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:11:25.382 04:02:39 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:25.382 04:02:39 -- common/autotest_common.sh@1194 -- # return 0 00:11:25.382 04:02:39 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:25.382 04:02:39 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:25.382 04:02:39 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:25.382 04:02:39 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:25.382 04:02:39 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:11:25.382 04:02:39 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:25.382 04:02:39 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:25.382 [ 0]:0x1 00:11:25.382 04:02:39 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:25.382 04:02:39 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:25.382 04:02:39 -- target/ns_masking.sh@40 -- # nguid=67b21ac2d48b41459367f3d389ebdb66 00:11:25.382 04:02:39 -- target/ns_masking.sh@41 -- # [[ 67b21ac2d48b41459367f3d389ebdb66 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:25.382 04:02:39 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:11:25.382 04:02:39 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:25.382 04:02:39 -- 
target/ns_masking.sh@39 -- # grep 0x2 00:11:25.382 [ 1]:0x2 00:11:25.382 04:02:39 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:25.382 04:02:39 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:25.382 04:02:39 -- target/ns_masking.sh@40 -- # nguid=fd541d96a476497db19ac60c864f807a 00:11:25.382 04:02:39 -- target/ns_masking.sh@41 -- # [[ fd541d96a476497db19ac60c864f807a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:25.382 04:02:39 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:25.382 04:02:39 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:11:25.382 04:02:39 -- common/autotest_common.sh@638 -- # local es=0 00:11:25.382 04:02:39 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:25.382 04:02:39 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:25.382 04:02:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:25.382 04:02:39 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:25.382 04:02:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:25.382 04:02:39 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:25.382 04:02:39 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:25.382 04:02:39 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:25.382 04:02:39 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:25.382 04:02:39 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:25.382 04:02:39 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:25.382 04:02:39 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:25.382 04:02:39 -- common/autotest_common.sh@641 -- # es=1 00:11:25.382 04:02:39 -- common/autotest_common.sh@649 -- 
# (( es > 128 )) 00:11:25.382 04:02:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:25.382 04:02:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:25.382 04:02:39 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:11:25.382 04:02:39 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:25.382 04:02:39 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:25.382 [ 0]:0x2 00:11:25.382 04:02:39 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:25.382 04:02:39 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:25.382 04:02:39 -- target/ns_masking.sh@40 -- # nguid=fd541d96a476497db19ac60c864f807a 00:11:25.382 04:02:39 -- target/ns_masking.sh@41 -- # [[ fd541d96a476497db19ac60c864f807a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:25.382 04:02:39 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:25.382 04:02:39 -- common/autotest_common.sh@638 -- # local es=0 00:11:25.382 04:02:39 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:25.382 04:02:39 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:25.382 04:02:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:25.382 04:02:39 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:25.382 04:02:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:25.382 04:02:39 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:25.382 04:02:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:25.382 04:02:39 -- common/autotest_common.sh@632 -- 
# arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:25.382 04:02:39 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:11:25.382 04:02:39 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:25.382 [2024-04-19 04:02:39.857147] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:25.382 request: 00:11:25.382 { 00:11:25.382 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:25.382 "nsid": 2, 00:11:25.382 "host": "nqn.2016-06.io.spdk:host1", 00:11:25.382 "method": "nvmf_ns_remove_host", 00:11:25.382 "req_id": 1 00:11:25.382 } 00:11:25.382 Got JSON-RPC error response 00:11:25.382 response: 00:11:25.382 { 00:11:25.382 "code": -32602, 00:11:25.382 "message": "Invalid parameters" 00:11:25.382 } 00:11:25.382 04:02:39 -- common/autotest_common.sh@641 -- # es=1 00:11:25.382 04:02:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:25.382 04:02:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:25.382 04:02:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:25.382 04:02:39 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:11:25.382 04:02:39 -- common/autotest_common.sh@638 -- # local es=0 00:11:25.382 04:02:39 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:25.382 04:02:39 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:25.382 04:02:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:25.382 04:02:39 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:25.382 04:02:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:25.382 04:02:39 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:25.382 04:02:39 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 
00:11:25.382 04:02:39 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:25.382 04:02:39 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:25.382 04:02:39 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:25.647 04:02:39 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:25.647 04:02:39 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:25.647 04:02:39 -- common/autotest_common.sh@641 -- # es=1 00:11:25.647 04:02:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:25.647 04:02:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:25.647 04:02:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:25.647 04:02:39 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:11:25.647 04:02:39 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:25.647 04:02:39 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:25.647 [ 0]:0x2 00:11:25.647 04:02:39 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:25.647 04:02:39 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:25.647 04:02:39 -- target/ns_masking.sh@40 -- # nguid=fd541d96a476497db19ac60c864f807a 00:11:25.647 04:02:39 -- target/ns_masking.sh@41 -- # [[ fd541d96a476497db19ac60c864f807a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:25.647 04:02:39 -- target/ns_masking.sh@108 -- # disconnect 00:11:25.647 04:02:39 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:25.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.916 04:02:40 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.199 04:02:40 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:11:26.199 04:02:40 -- target/ns_masking.sh@114 -- # nvmftestfini 00:11:26.199 04:02:40 -- 
nvmf/common.sh@477 -- # nvmfcleanup 00:11:26.199 04:02:40 -- nvmf/common.sh@117 -- # sync 00:11:26.199 04:02:40 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:26.199 04:02:40 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:26.199 04:02:40 -- nvmf/common.sh@120 -- # set +e 00:11:26.199 04:02:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:26.199 04:02:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:26.199 rmmod nvme_rdma 00:11:26.199 rmmod nvme_fabrics 00:11:26.199 04:02:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:26.199 04:02:40 -- nvmf/common.sh@124 -- # set -e 00:11:26.199 04:02:40 -- nvmf/common.sh@125 -- # return 0 00:11:26.199 04:02:40 -- nvmf/common.sh@478 -- # '[' -n 228205 ']' 00:11:26.199 04:02:40 -- nvmf/common.sh@479 -- # killprocess 228205 00:11:26.199 04:02:40 -- common/autotest_common.sh@936 -- # '[' -z 228205 ']' 00:11:26.199 04:02:40 -- common/autotest_common.sh@940 -- # kill -0 228205 00:11:26.199 04:02:40 -- common/autotest_common.sh@941 -- # uname 00:11:26.199 04:02:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:26.199 04:02:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 228205 00:11:26.199 04:02:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:26.199 04:02:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:26.199 04:02:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 228205' 00:11:26.199 killing process with pid 228205 00:11:26.199 04:02:40 -- common/autotest_common.sh@955 -- # kill 228205 00:11:26.199 04:02:40 -- common/autotest_common.sh@960 -- # wait 228205 00:11:26.486 04:02:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:26.486 04:02:40 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:11:26.486 00:11:26.486 real 0m18.005s 00:11:26.486 user 0m54.391s 00:11:26.486 sys 0m5.081s 00:11:26.486 04:02:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:26.486 04:02:40 -- 
common/autotest_common.sh@10 -- # set +x 00:11:26.486 ************************************ 00:11:26.486 END TEST nvmf_ns_masking 00:11:26.486 ************************************ 00:11:26.486 04:02:40 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:26.486 04:02:40 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:11:26.486 04:02:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:26.486 04:02:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:26.486 04:02:40 -- common/autotest_common.sh@10 -- # set +x 00:11:26.754 ************************************ 00:11:26.754 START TEST nvmf_nvme_cli 00:11:26.754 ************************************ 00:11:26.754 04:02:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:11:26.754 * Looking for test storage... 00:11:26.754 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:26.754 04:02:41 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:26.754 04:02:41 -- nvmf/common.sh@7 -- # uname -s 00:11:26.754 04:02:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.754 04:02:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.754 04:02:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.754 04:02:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.754 04:02:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.754 04:02:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.754 04:02:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.754 04:02:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.754 04:02:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.754 04:02:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.754 04:02:41 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:11:26.754 04:02:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:11:26.754 04:02:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.754 04:02:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.754 04:02:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.754 04:02:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.754 04:02:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:26.754 04:02:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.754 04:02:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.754 04:02:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.754 04:02:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.754 04:02:41 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.754 04:02:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.754 04:02:41 -- paths/export.sh@5 -- # export PATH 00:11:26.754 04:02:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.754 04:02:41 -- nvmf/common.sh@47 -- # : 0 00:11:26.754 04:02:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:26.754 04:02:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:26.754 04:02:41 -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:11:26.754 04:02:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.754 04:02:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.754 04:02:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:26.754 04:02:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:26.754 04:02:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:26.754 04:02:41 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:26.754 04:02:41 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:26.754 04:02:41 -- target/nvme_cli.sh@14 -- # devs=() 00:11:26.754 04:02:41 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:26.754 04:02:41 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:11:26.754 04:02:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.754 04:02:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:26.754 04:02:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:26.754 04:02:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:26.754 04:02:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.754 04:02:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:26.754 04:02:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.754 04:02:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:26.754 04:02:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:26.754 04:02:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:26.754 04:02:41 -- common/autotest_common.sh@10 -- # set +x 00:11:32.137 04:02:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:32.137 04:02:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:32.137 04:02:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:32.137 04:02:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:32.137 04:02:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:32.137 04:02:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:32.137 04:02:46 -- nvmf/common.sh@293 -- # 
local -A pci_drivers 00:11:32.137 04:02:46 -- nvmf/common.sh@295 -- # net_devs=() 00:11:32.137 04:02:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:32.137 04:02:46 -- nvmf/common.sh@296 -- # e810=() 00:11:32.137 04:02:46 -- nvmf/common.sh@296 -- # local -ga e810 00:11:32.137 04:02:46 -- nvmf/common.sh@297 -- # x722=() 00:11:32.137 04:02:46 -- nvmf/common.sh@297 -- # local -ga x722 00:11:32.137 04:02:46 -- nvmf/common.sh@298 -- # mlx=() 00:11:32.137 04:02:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:32.137 04:02:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:32.137 04:02:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:32.137 04:02:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:32.137 04:02:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:32.137 04:02:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:32.137 04:02:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:32.137 04:02:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:32.137 04:02:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:32.137 04:02:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:32.137 04:02:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:32.137 04:02:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:32.138 04:02:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:32.138 04:02:46 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:32.138 04:02:46 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:32.138 04:02:46 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:32.138 04:02:46 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:32.138 04:02:46 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:32.138 04:02:46 -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:32.138 04:02:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:32.138 04:02:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:32.138 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:32.138 04:02:46 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:32.138 04:02:46 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:32.138 04:02:46 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:32.138 04:02:46 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:32.138 04:02:46 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:32.138 04:02:46 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:32.138 04:02:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:32.138 04:02:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:32.138 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:32.138 04:02:46 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:32.138 04:02:46 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:32.138 04:02:46 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:32.138 04:02:46 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:32.138 04:02:46 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:32.138 04:02:46 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:32.138 04:02:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:32.138 04:02:46 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:32.138 04:02:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:32.138 04:02:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.138 04:02:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:32.138 04:02:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.138 04:02:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:32.138 Found net devices under 
0000:18:00.0: mlx_0_0 00:11:32.138 04:02:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.138 04:02:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:32.138 04:02:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.138 04:02:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:32.138 04:02:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.138 04:02:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:32.138 Found net devices under 0000:18:00.1: mlx_0_1 00:11:32.138 04:02:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.138 04:02:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:32.138 04:02:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:32.138 04:02:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:32.138 04:02:46 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:11:32.138 04:02:46 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:11:32.138 04:02:46 -- nvmf/common.sh@409 -- # rdma_device_init 00:11:32.138 04:02:46 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:11:32.138 04:02:46 -- nvmf/common.sh@58 -- # uname 00:11:32.138 04:02:46 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:32.138 04:02:46 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:32.138 04:02:46 -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:32.138 04:02:46 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:32.138 04:02:46 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:32.138 04:02:46 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:32.138 04:02:46 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:32.138 04:02:46 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:32.138 04:02:46 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:11:32.138 04:02:46 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:32.138 04:02:46 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:32.138 04:02:46 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev 
rxe_net_devs 00:11:32.138 04:02:46 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:32.138 04:02:46 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:32.138 04:02:46 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:32.138 04:02:46 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:32.138 04:02:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:32.138 04:02:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:32.138 04:02:46 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:32.138 04:02:46 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:32.138 04:02:46 -- nvmf/common.sh@105 -- # continue 2 00:11:32.138 04:02:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:32.138 04:02:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:32.138 04:02:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:32.138 04:02:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:32.138 04:02:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:32.138 04:02:46 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:32.138 04:02:46 -- nvmf/common.sh@105 -- # continue 2 00:11:32.138 04:02:46 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:32.138 04:02:46 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:32.138 04:02:46 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:32.138 04:02:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:32.138 04:02:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:32.138 04:02:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:32.138 04:02:46 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:32.138 04:02:46 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:32.138 04:02:46 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:32.138 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:32.138 link/ether 
50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:11:32.138 altname enp24s0f0np0 00:11:32.138 altname ens785f0np0 00:11:32.138 inet 192.168.100.8/24 scope global mlx_0_0 00:11:32.138 valid_lft forever preferred_lft forever 00:11:32.138 04:02:46 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:32.138 04:02:46 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:32.138 04:02:46 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:32.138 04:02:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:32.138 04:02:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:32.138 04:02:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:32.138 04:02:46 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:32.138 04:02:46 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:32.138 04:02:46 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:32.138 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:32.138 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:11:32.138 altname enp24s0f1np1 00:11:32.138 altname ens785f1np1 00:11:32.138 inet 192.168.100.9/24 scope global mlx_0_1 00:11:32.138 valid_lft forever preferred_lft forever 00:11:32.138 04:02:46 -- nvmf/common.sh@411 -- # return 0 00:11:32.138 04:02:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:32.138 04:02:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:32.138 04:02:46 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:11:32.138 04:02:46 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:11:32.138 04:02:46 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:32.138 04:02:46 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:32.138 04:02:46 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:32.138 04:02:46 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:32.138 04:02:46 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:32.138 04:02:46 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 
00:11:32.138 04:02:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:32.138 04:02:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:32.138 04:02:46 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:32.138 04:02:46 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:32.138 04:02:46 -- nvmf/common.sh@105 -- # continue 2 00:11:32.138 04:02:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:32.138 04:02:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:32.138 04:02:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:32.138 04:02:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:32.138 04:02:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:32.138 04:02:46 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:32.138 04:02:46 -- nvmf/common.sh@105 -- # continue 2 00:11:32.138 04:02:46 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:32.138 04:02:46 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:32.138 04:02:46 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:32.138 04:02:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:32.138 04:02:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:32.138 04:02:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:32.138 04:02:46 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:32.138 04:02:46 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:32.138 04:02:46 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:32.138 04:02:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:32.138 04:02:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:32.138 04:02:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:32.138 04:02:46 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:11:32.138 192.168.100.9' 00:11:32.138 04:02:46 -- nvmf/common.sh@446 -- # head -n 1 00:11:32.138 04:02:46 -- nvmf/common.sh@446 -- # echo '192.168.100.8 
00:11:32.138 192.168.100.9' 00:11:32.138 04:02:46 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:32.138 04:02:46 -- nvmf/common.sh@447 -- # head -n 1 00:11:32.138 04:02:46 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:11:32.138 192.168.100.9' 00:11:32.138 04:02:46 -- nvmf/common.sh@447 -- # tail -n +2 00:11:32.138 04:02:46 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:32.138 04:02:46 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:11:32.138 04:02:46 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:32.138 04:02:46 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:11:32.138 04:02:46 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:11:32.138 04:02:46 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:11:32.413 04:02:46 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:32.413 04:02:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:32.413 04:02:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:32.413 04:02:46 -- common/autotest_common.sh@10 -- # set +x 00:11:32.413 04:02:46 -- nvmf/common.sh@470 -- # nvmfpid=233863 00:11:32.413 04:02:46 -- nvmf/common.sh@471 -- # waitforlisten 233863 00:11:32.413 04:02:46 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:32.413 04:02:46 -- common/autotest_common.sh@817 -- # '[' -z 233863 ']' 00:11:32.413 04:02:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.413 04:02:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:32.413 04:02:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:32.413 04:02:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:32.413 04:02:46 -- common/autotest_common.sh@10 -- # set +x 00:11:32.413 [2024-04-19 04:02:46.720293] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:11:32.413 [2024-04-19 04:02:46.720333] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.413 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.413 [2024-04-19 04:02:46.769037] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.413 [2024-04-19 04:02:46.841408] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.413 [2024-04-19 04:02:46.841443] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.413 [2024-04-19 04:02:46.841450] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.413 [2024-04-19 04:02:46.841456] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.413 [2024-04-19 04:02:46.841460] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:32.413 [2024-04-19 04:02:46.841496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.413 [2024-04-19 04:02:46.841578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.413 [2024-04-19 04:02:46.841663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.413 [2024-04-19 04:02:46.841664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.013 04:02:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:33.013 04:02:47 -- common/autotest_common.sh@850 -- # return 0 00:11:33.013 04:02:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:33.013 04:02:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:33.013 04:02:47 -- common/autotest_common.sh@10 -- # set +x 00:11:33.013 04:02:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.013 04:02:47 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:33.013 04:02:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:33.013 04:02:47 -- common/autotest_common.sh@10 -- # set +x 00:11:33.292 [2024-04-19 04:02:47.556174] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1af26c0/0x1af6bb0) succeed. 00:11:33.292 [2024-04-19 04:02:47.565540] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1af3cb0/0x1b38240) succeed. 
00:11:33.292 04:02:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:33.292 04:02:47 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:33.292 04:02:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:33.292 04:02:47 -- common/autotest_common.sh@10 -- # set +x 00:11:33.292 Malloc0 00:11:33.292 04:02:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:33.292 04:02:47 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:33.292 04:02:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:33.292 04:02:47 -- common/autotest_common.sh@10 -- # set +x 00:11:33.292 Malloc1 00:11:33.292 04:02:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:33.292 04:02:47 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:33.292 04:02:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:33.292 04:02:47 -- common/autotest_common.sh@10 -- # set +x 00:11:33.292 04:02:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:33.292 04:02:47 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:33.292 04:02:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:33.292 04:02:47 -- common/autotest_common.sh@10 -- # set +x 00:11:33.292 04:02:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:33.292 04:02:47 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:33.292 04:02:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:33.292 04:02:47 -- common/autotest_common.sh@10 -- # set +x 00:11:33.292 04:02:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:33.292 04:02:47 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:33.292 04:02:47 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:11:33.292 04:02:47 -- common/autotest_common.sh@10 -- # set +x 00:11:33.292 [2024-04-19 04:02:47.746121] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:33.292 04:02:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:33.292 04:02:47 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:33.292 04:02:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:33.292 04:02:47 -- common/autotest_common.sh@10 -- # set +x 00:11:33.292 04:02:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:33.292 04:02:47 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:11:33.565 00:11:33.565 Discovery Log Number of Records 2, Generation counter 2 00:11:33.565 =====Discovery Log Entry 0====== 00:11:33.565 trtype: rdma 00:11:33.565 adrfam: ipv4 00:11:33.565 subtype: current discovery subsystem 00:11:33.565 treq: not required 00:11:33.565 portid: 0 00:11:33.565 trsvcid: 4420 00:11:33.565 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:33.565 traddr: 192.168.100.8 00:11:33.565 eflags: explicit discovery connections, duplicate discovery information 00:11:33.565 rdma_prtype: not specified 00:11:33.565 rdma_qptype: connected 00:11:33.565 rdma_cms: rdma-cm 00:11:33.565 rdma_pkey: 0x0000 00:11:33.565 =====Discovery Log Entry 1====== 00:11:33.565 trtype: rdma 00:11:33.565 adrfam: ipv4 00:11:33.565 subtype: nvme subsystem 00:11:33.565 treq: not required 00:11:33.565 portid: 0 00:11:33.565 trsvcid: 4420 00:11:33.565 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:33.565 traddr: 192.168.100.8 00:11:33.565 eflags: none 00:11:33.565 rdma_prtype: not specified 00:11:33.565 rdma_qptype: connected 00:11:33.565 rdma_cms: rdma-cm 00:11:33.565 rdma_pkey: 0x0000 00:11:33.565 04:02:47 -- target/nvme_cli.sh@31 -- # 
devs=($(get_nvme_devs)) 00:11:33.565 04:02:47 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:33.565 04:02:47 -- nvmf/common.sh@511 -- # local dev _ 00:11:33.565 04:02:47 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:33.565 04:02:47 -- nvmf/common.sh@510 -- # nvme list 00:11:33.565 04:02:47 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:11:33.566 04:02:47 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:33.566 04:02:47 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:11:33.566 04:02:47 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:33.566 04:02:47 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:33.566 04:02:47 -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:34.532 04:02:48 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:34.532 04:02:48 -- common/autotest_common.sh@1184 -- # local i=0 00:11:34.532 04:02:48 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:34.532 04:02:48 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:11:34.532 04:02:48 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:11:34.532 04:02:48 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:36.511 04:02:50 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:36.511 04:02:50 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:36.511 04:02:50 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:36.511 04:02:50 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:11:36.511 04:02:50 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:36.511 04:02:50 -- common/autotest_common.sh@1194 -- # return 0 00:11:36.511 04:02:50 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:36.511 04:02:50 -- 
nvmf/common.sh@511 -- # local dev _ 00:11:36.511 04:02:50 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:36.511 04:02:50 -- nvmf/common.sh@510 -- # nvme list 00:11:36.511 04:02:50 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:11:36.511 04:02:50 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:36.511 04:02:50 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:11:36.511 04:02:50 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:36.511 04:02:50 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:36.511 04:02:50 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:11:36.511 04:02:50 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:36.511 04:02:50 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:36.511 04:02:50 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:11:36.511 04:02:50 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:36.511 04:02:50 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:36.511 /dev/nvme0n1 ]] 00:11:36.511 04:02:50 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:36.511 04:02:50 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:36.511 04:02:50 -- nvmf/common.sh@511 -- # local dev _ 00:11:36.511 04:02:50 -- nvmf/common.sh@510 -- # nvme list 00:11:36.511 04:02:50 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:36.511 04:02:50 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:11:36.511 04:02:50 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:36.511 04:02:50 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:11:36.511 04:02:50 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:36.511 04:02:50 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:36.511 04:02:50 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:11:36.511 04:02:50 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:36.511 04:02:50 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:36.511 04:02:50 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:11:36.511 04:02:50 -- nvmf/common.sh@513 -- # 
read -r dev _ 00:11:36.511 04:02:50 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:36.511 04:02:50 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:37.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.599 04:02:51 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:37.599 04:02:51 -- common/autotest_common.sh@1205 -- # local i=0 00:11:37.599 04:02:51 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:37.599 04:02:51 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.599 04:02:51 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:37.599 04:02:51 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.599 04:02:51 -- common/autotest_common.sh@1217 -- # return 0 00:11:37.599 04:02:51 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:37.599 04:02:51 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:37.599 04:02:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:37.599 04:02:51 -- common/autotest_common.sh@10 -- # set +x 00:11:37.599 04:02:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:37.599 04:02:51 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:37.599 04:02:51 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:37.599 04:02:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:37.599 04:02:51 -- nvmf/common.sh@117 -- # sync 00:11:37.599 04:02:51 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:37.599 04:02:51 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:37.599 04:02:51 -- nvmf/common.sh@120 -- # set +e 00:11:37.599 04:02:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:37.599 04:02:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:37.599 rmmod nvme_rdma 00:11:37.599 rmmod nvme_fabrics 00:11:37.599 04:02:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:11:37.599 04:02:51 -- nvmf/common.sh@124 -- # set -e 00:11:37.599 04:02:51 -- nvmf/common.sh@125 -- # return 0 00:11:37.599 04:02:51 -- nvmf/common.sh@478 -- # '[' -n 233863 ']' 00:11:37.599 04:02:51 -- nvmf/common.sh@479 -- # killprocess 233863 00:11:37.599 04:02:51 -- common/autotest_common.sh@936 -- # '[' -z 233863 ']' 00:11:37.599 04:02:51 -- common/autotest_common.sh@940 -- # kill -0 233863 00:11:37.599 04:02:51 -- common/autotest_common.sh@941 -- # uname 00:11:37.599 04:02:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:37.599 04:02:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 233863 00:11:37.599 04:02:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:37.599 04:02:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:37.599 04:02:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 233863' 00:11:37.599 killing process with pid 233863 00:11:37.599 04:02:52 -- common/autotest_common.sh@955 -- # kill 233863 00:11:37.599 04:02:52 -- common/autotest_common.sh@960 -- # wait 233863 00:11:37.883 04:02:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:37.883 04:02:52 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:11:37.883 00:11:37.883 real 0m11.273s 00:11:37.883 user 0m23.403s 00:11:37.883 sys 0m4.598s 00:11:37.883 04:02:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:37.883 04:02:52 -- common/autotest_common.sh@10 -- # set +x 00:11:37.883 ************************************ 00:11:37.883 END TEST nvmf_nvme_cli 00:11:37.883 ************************************ 00:11:37.883 04:02:52 -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:11:37.883 04:02:52 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:11:37.883 04:02:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:37.883 04:02:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:11:37.883 04:02:52 -- common/autotest_common.sh@10 -- # set +x 00:11:38.158 ************************************ 00:11:38.158 START TEST nvmf_host_management 00:11:38.158 ************************************ 00:11:38.158 04:02:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:11:38.158 * Looking for test storage... 00:11:38.158 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:38.158 04:02:52 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:38.158 04:02:52 -- nvmf/common.sh@7 -- # uname -s 00:11:38.158 04:02:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:38.158 04:02:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:38.158 04:02:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:38.158 04:02:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:38.158 04:02:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:38.158 04:02:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:38.158 04:02:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:38.158 04:02:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:38.158 04:02:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:38.158 04:02:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:38.158 04:02:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:11:38.158 04:02:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:11:38.158 04:02:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:38.158 04:02:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:38.158 04:02:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:38.158 04:02:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:38.158 04:02:52 -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:38.158 04:02:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.158 04:02:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.158 04:02:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.158 04:02:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.158 04:02:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.158 04:02:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.158 04:02:52 -- paths/export.sh@5 -- # export PATH 00:11:38.158 04:02:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.158 04:02:52 -- nvmf/common.sh@47 -- # : 0 00:11:38.158 04:02:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:38.158 04:02:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:38.158 04:02:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:38.158 04:02:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.158 04:02:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.158 04:02:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:38.158 04:02:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:38.158 04:02:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:38.158 04:02:52 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:38.158 04:02:52 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:38.158 04:02:52 -- 
target/host_management.sh@105 -- # nvmftestinit 00:11:38.158 04:02:52 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:11:38.158 04:02:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:38.158 04:02:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:38.158 04:02:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:38.158 04:02:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:38.158 04:02:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.158 04:02:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:38.158 04:02:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.158 04:02:52 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:38.158 04:02:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:38.158 04:02:52 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:38.158 04:02:52 -- common/autotest_common.sh@10 -- # set +x 00:11:43.436 04:02:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:43.436 04:02:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:43.436 04:02:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:43.436 04:02:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:43.436 04:02:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:43.436 04:02:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:43.436 04:02:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:43.436 04:02:57 -- nvmf/common.sh@295 -- # net_devs=() 00:11:43.436 04:02:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:43.436 04:02:57 -- nvmf/common.sh@296 -- # e810=() 00:11:43.436 04:02:57 -- nvmf/common.sh@296 -- # local -ga e810 00:11:43.436 04:02:57 -- nvmf/common.sh@297 -- # x722=() 00:11:43.436 04:02:57 -- nvmf/common.sh@297 -- # local -ga x722 00:11:43.436 04:02:57 -- nvmf/common.sh@298 -- # mlx=() 00:11:43.436 04:02:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:43.436 04:02:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:11:43.436 04:02:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.436 04:02:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.436 04:02:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.436 04:02:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.436 04:02:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.436 04:02:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.436 04:02:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.436 04:02:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.436 04:02:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.436 04:02:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.436 04:02:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:43.436 04:02:57 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:43.436 04:02:57 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:43.437 04:02:57 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:43.437 04:02:57 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:43.437 04:02:57 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:43.437 04:02:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:43.437 04:02:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:43.437 04:02:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:43.437 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:43.437 04:02:57 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:43.437 04:02:57 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:43.437 04:02:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:43.437 04:02:57 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:43.437 04:02:57 -- nvmf/common.sh@352 -- # [[ 
rdma == rdma ]] 00:11:43.437 04:02:57 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:43.437 04:02:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:43.437 04:02:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:43.437 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:43.437 04:02:57 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:43.437 04:02:57 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:43.437 04:02:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:43.437 04:02:57 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:43.437 04:02:57 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:43.437 04:02:57 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:43.437 04:02:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:43.437 04:02:57 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:43.437 04:02:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:43.437 04:02:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.437 04:02:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:43.437 04:02:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.437 04:02:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:43.437 Found net devices under 0000:18:00.0: mlx_0_0 00:11:43.437 04:02:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.437 04:02:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:43.437 04:02:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.437 04:02:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:43.437 04:02:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.437 04:02:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:43.437 Found net devices under 0000:18:00.1: mlx_0_1 00:11:43.437 04:02:57 -- 
nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.437 04:02:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:43.437 04:02:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:43.437 04:02:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:43.437 04:02:57 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:11:43.437 04:02:57 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:11:43.437 04:02:57 -- nvmf/common.sh@409 -- # rdma_device_init 00:11:43.437 04:02:57 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:11:43.437 04:02:57 -- nvmf/common.sh@58 -- # uname 00:11:43.437 04:02:57 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:43.437 04:02:57 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:43.437 04:02:57 -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:43.437 04:02:57 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:43.437 04:02:57 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:43.437 04:02:57 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:43.437 04:02:57 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:43.437 04:02:57 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:43.437 04:02:57 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:11:43.437 04:02:57 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:43.437 04:02:57 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:43.437 04:02:57 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:43.437 04:02:57 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:43.437 04:02:57 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:43.437 04:02:57 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:43.437 04:02:57 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:43.437 04:02:57 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:43.437 04:02:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.437 04:02:57 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:43.437 04:02:57 -- 
nvmf/common.sh@104 -- # echo mlx_0_0 00:11:43.437 04:02:57 -- nvmf/common.sh@105 -- # continue 2 00:11:43.437 04:02:57 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:43.437 04:02:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.437 04:02:57 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:43.437 04:02:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.437 04:02:57 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:43.437 04:02:57 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:43.437 04:02:57 -- nvmf/common.sh@105 -- # continue 2 00:11:43.437 04:02:57 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:43.437 04:02:57 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:43.437 04:02:57 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:43.437 04:02:57 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:43.437 04:02:57 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:43.437 04:02:57 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:43.437 04:02:57 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:43.437 04:02:57 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:43.437 04:02:57 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:43.437 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:43.437 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:11:43.437 altname enp24s0f0np0 00:11:43.437 altname ens785f0np0 00:11:43.437 inet 192.168.100.8/24 scope global mlx_0_0 00:11:43.437 valid_lft forever preferred_lft forever 00:11:43.437 04:02:57 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:43.437 04:02:57 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:43.437 04:02:57 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:43.437 04:02:57 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:43.437 04:02:57 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:43.437 04:02:57 -- nvmf/common.sh@113 -- 
# cut -d/ -f1 00:11:43.437 04:02:57 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:43.437 04:02:57 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:43.437 04:02:57 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:43.437 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:43.437 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:11:43.437 altname enp24s0f1np1 00:11:43.437 altname ens785f1np1 00:11:43.437 inet 192.168.100.9/24 scope global mlx_0_1 00:11:43.437 valid_lft forever preferred_lft forever 00:11:43.437 04:02:57 -- nvmf/common.sh@411 -- # return 0 00:11:43.437 04:02:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:43.437 04:02:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:43.437 04:02:57 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:11:43.437 04:02:57 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:11:43.437 04:02:57 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:43.437 04:02:57 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:43.437 04:02:57 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:43.437 04:02:57 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:43.437 04:02:57 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:43.437 04:02:57 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:43.437 04:02:57 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:43.437 04:02:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.437 04:02:57 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:43.437 04:02:57 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:43.437 04:02:57 -- nvmf/common.sh@105 -- # continue 2 00:11:43.437 04:02:57 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:43.437 04:02:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.437 04:02:57 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:43.437 
04:02:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.437 04:02:57 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:43.437 04:02:57 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:43.437 04:02:57 -- nvmf/common.sh@105 -- # continue 2 00:11:43.437 04:02:57 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:43.437 04:02:57 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:43.437 04:02:57 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:43.437 04:02:57 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:43.437 04:02:57 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:43.437 04:02:57 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:43.437 04:02:57 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:43.437 04:02:57 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:43.437 04:02:57 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:43.437 04:02:57 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:43.437 04:02:57 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:43.437 04:02:57 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:43.437 04:02:57 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:11:43.437 192.168.100.9' 00:11:43.437 04:02:57 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:11:43.437 192.168.100.9' 00:11:43.437 04:02:57 -- nvmf/common.sh@446 -- # head -n 1 00:11:43.437 04:02:57 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:43.437 04:02:57 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:11:43.437 192.168.100.9' 00:11:43.437 04:02:57 -- nvmf/common.sh@447 -- # head -n 1 00:11:43.437 04:02:57 -- nvmf/common.sh@447 -- # tail -n +2 00:11:43.437 04:02:57 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:43.437 04:02:57 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:11:43.437 04:02:57 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:43.437 04:02:57 -- nvmf/common.sh@457 
-- # '[' rdma == tcp ']' 00:11:43.437 04:02:57 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:11:43.437 04:02:57 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:11:43.437 04:02:57 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:11:43.437 04:02:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:43.437 04:02:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:43.437 04:02:57 -- common/autotest_common.sh@10 -- # set +x 00:11:43.437 ************************************ 00:11:43.437 START TEST nvmf_host_management 00:11:43.437 ************************************ 00:11:43.437 04:02:57 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:11:43.438 04:02:57 -- target/host_management.sh@69 -- # starttarget 00:11:43.438 04:02:57 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:43.438 04:02:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:43.438 04:02:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:43.438 04:02:57 -- common/autotest_common.sh@10 -- # set +x 00:11:43.438 04:02:57 -- nvmf/common.sh@470 -- # nvmfpid=238225 00:11:43.438 04:02:57 -- nvmf/common.sh@471 -- # waitforlisten 238225 00:11:43.438 04:02:57 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:43.438 04:02:57 -- common/autotest_common.sh@817 -- # '[' -z 238225 ']' 00:11:43.438 04:02:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.438 04:02:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:43.438 04:02:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:43.438 04:02:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:43.438 04:02:57 -- common/autotest_common.sh@10 -- # set +x 00:11:43.438 [2024-04-19 04:02:57.845253] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:11:43.438 [2024-04-19 04:02:57.845291] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.438 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.438 [2024-04-19 04:02:57.894925] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:43.697 [2024-04-19 04:02:57.964955] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.697 [2024-04-19 04:02:57.964991] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.697 [2024-04-19 04:02:57.964998] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.697 [2024-04-19 04:02:57.965003] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.697 [2024-04-19 04:02:57.965008] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:43.697 [2024-04-19 04:02:57.965052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.697 [2024-04-19 04:02:57.965068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.697 [2024-04-19 04:02:57.965180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.697 [2024-04-19 04:02:57.965181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:44.267 04:02:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:44.267 04:02:58 -- common/autotest_common.sh@850 -- # return 0 00:11:44.267 04:02:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:44.267 04:02:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:44.267 04:02:58 -- common/autotest_common.sh@10 -- # set +x 00:11:44.267 04:02:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.267 04:02:58 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:44.267 04:02:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:44.267 04:02:58 -- common/autotest_common.sh@10 -- # set +x 00:11:44.267 [2024-04-19 04:02:58.680396] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5f89b0/0x5fcea0) succeed. 00:11:44.267 [2024-04-19 04:02:58.689607] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5f9fa0/0x63e530) succeed. 
00:11:44.526 04:02:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:44.526 04:02:58 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:44.526 04:02:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:44.526 04:02:58 -- common/autotest_common.sh@10 -- # set +x 00:11:44.526 04:02:58 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:44.526 04:02:58 -- target/host_management.sh@23 -- # cat 00:11:44.526 04:02:58 -- target/host_management.sh@30 -- # rpc_cmd 00:11:44.526 04:02:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:44.526 04:02:58 -- common/autotest_common.sh@10 -- # set +x 00:11:44.526 Malloc0 00:11:44.526 [2024-04-19 04:02:58.852704] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:44.526 04:02:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:44.526 04:02:58 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:44.527 04:02:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:44.527 04:02:58 -- common/autotest_common.sh@10 -- # set +x 00:11:44.527 04:02:58 -- target/host_management.sh@73 -- # perfpid=238521 00:11:44.527 04:02:58 -- target/host_management.sh@74 -- # waitforlisten 238521 /var/tmp/bdevperf.sock 00:11:44.527 04:02:58 -- common/autotest_common.sh@817 -- # '[' -z 238521 ']' 00:11:44.527 04:02:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:44.527 04:02:58 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:44.527 04:02:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:44.527 04:02:58 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:44.527 04:02:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:44.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:44.527 04:02:58 -- nvmf/common.sh@521 -- # config=() 00:11:44.527 04:02:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:44.527 04:02:58 -- nvmf/common.sh@521 -- # local subsystem config 00:11:44.527 04:02:58 -- common/autotest_common.sh@10 -- # set +x 00:11:44.527 04:02:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:44.527 04:02:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:44.527 { 00:11:44.527 "params": { 00:11:44.527 "name": "Nvme$subsystem", 00:11:44.527 "trtype": "$TEST_TRANSPORT", 00:11:44.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:44.527 "adrfam": "ipv4", 00:11:44.527 "trsvcid": "$NVMF_PORT", 00:11:44.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:44.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:44.527 "hdgst": ${hdgst:-false}, 00:11:44.527 "ddgst": ${ddgst:-false} 00:11:44.527 }, 00:11:44.527 "method": "bdev_nvme_attach_controller" 00:11:44.527 } 00:11:44.527 EOF 00:11:44.527 )") 00:11:44.527 04:02:58 -- nvmf/common.sh@543 -- # cat 00:11:44.527 04:02:58 -- nvmf/common.sh@545 -- # jq . 00:11:44.527 04:02:58 -- nvmf/common.sh@546 -- # IFS=, 00:11:44.527 04:02:58 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:44.527 "params": { 00:11:44.527 "name": "Nvme0", 00:11:44.527 "trtype": "rdma", 00:11:44.527 "traddr": "192.168.100.8", 00:11:44.527 "adrfam": "ipv4", 00:11:44.527 "trsvcid": "4420", 00:11:44.527 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:44.527 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:44.527 "hdgst": false, 00:11:44.527 "ddgst": false 00:11:44.527 }, 00:11:44.527 "method": "bdev_nvme_attach_controller" 00:11:44.527 }' 00:11:44.527 [2024-04-19 04:02:58.939584] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:11:44.527 [2024-04-19 04:02:58.939625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid238521 ] 00:11:44.527 EAL: No free 2048 kB hugepages reported on node 1 00:11:44.527 [2024-04-19 04:02:58.990863] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.787 [2024-04-19 04:02:59.058479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.787 Running I/O for 10 seconds... 00:11:45.356 04:02:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:45.356 04:02:59 -- common/autotest_common.sh@850 -- # return 0 00:11:45.356 04:02:59 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:45.356 04:02:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:45.356 04:02:59 -- common/autotest_common.sh@10 -- # set +x 00:11:45.356 04:02:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:45.356 04:02:59 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:45.356 04:02:59 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:45.356 04:02:59 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:45.356 04:02:59 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:45.356 04:02:59 -- target/host_management.sh@52 -- # local ret=1 00:11:45.356 04:02:59 -- target/host_management.sh@53 -- # local i 00:11:45.356 04:02:59 -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:45.356 04:02:59 -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:45.356 04:02:59 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:45.356 04:02:59 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 
00:11:45.356 04:02:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:45.356 04:02:59 -- common/autotest_common.sh@10 -- # set +x 00:11:45.356 04:02:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:45.356 04:02:59 -- target/host_management.sh@55 -- # read_io_count=1411 00:11:45.356 04:02:59 -- target/host_management.sh@58 -- # '[' 1411 -ge 100 ']' 00:11:45.356 04:02:59 -- target/host_management.sh@59 -- # ret=0 00:11:45.356 04:02:59 -- target/host_management.sh@60 -- # break 00:11:45.356 04:02:59 -- target/host_management.sh@64 -- # return 0 00:11:45.356 04:02:59 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:45.356 04:02:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:45.356 04:02:59 -- common/autotest_common.sh@10 -- # set +x 00:11:45.356 04:02:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:45.356 04:02:59 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:45.356 04:02:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:45.356 04:02:59 -- common/autotest_common.sh@10 -- # set +x 00:11:45.356 04:02:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:45.356 04:02:59 -- target/host_management.sh@87 -- # sleep 1 00:11:46.296 [2024-04-19 04:03:00.809624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x182600 00:11:46.296 [2024-04-19 04:03:00.809664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.296 [2024-04-19 04:03:00.809684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f880 len:0x10000 key:0x182600 00:11:46.296 [2024-04-19 
04:03:00.809691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.296 [2024-04-19 04:03:00.809699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0x182500 00:11:46.296 [2024-04-19 04:03:00.809705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.296 [2024-04-19 04:03:00.809714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edff00 len:0x10000 key:0x182500 00:11:46.296 [2024-04-19 04:03:00.809719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.296 [2024-04-19 04:03:00.809727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfe80 len:0x10000 key:0x182500 00:11:46.296 [2024-04-19 04:03:00.809733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.296 [2024-04-19 04:03:00.809741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfe00 len:0x10000 key:0x182500 00:11:46.296 [2024-04-19 04:03:00.809747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.296 [2024-04-19 04:03:00.809754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eafd80 len:0x10000 key:0x182500 00:11:46.296 [2024-04-19 04:03:00.809760] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.296 [2024-04-19 04:03:00.809768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fd00 len:0x10000 key:0x182500 00:11:46.297 [2024-04-19 04:03:00.809774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.809782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e8fc80 len:0x10000 key:0x182500 00:11:46.297 [2024-04-19 04:03:00.809787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.809795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e7fc00 len:0x10000 key:0x182500 00:11:46.297 [2024-04-19 04:03:00.809801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.809808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x182500 00:11:46.297 [2024-04-19 04:03:00.809818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.809826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e5fb00 len:0x10000 key:0x182500 00:11:46.297 [2024-04-19 04:03:00.809832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.809840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4fa80 len:0x10000 key:0x182500 00:11:46.297 [2024-04-19 04:03:00.809846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.809853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3fa00 len:0x10000 key:0x182500 00:11:46.297 [2024-04-19 04:03:00.809859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.809867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f980 len:0x10000 key:0x182500 00:11:46.297 [2024-04-19 04:03:00.809873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.809881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f900 len:0x10000 key:0x182500 00:11:46.297 [2024-04-19 04:03:00.809887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.809894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e0f880 len:0x10000 key:0x182500 00:11:46.297 [2024-04-19 04:03:00.809900] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.809908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a4a780 len:0x10000 key:0x182000 00:11:46.297 [2024-04-19 04:03:00.809914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.809922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a3a700 len:0x10000 key:0x182000 00:11:46.297 [2024-04-19 04:03:00.809927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.809935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a2a680 len:0x10000 key:0x182000 00:11:46.297 [2024-04-19 04:03:00.809941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.809949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a1a600 len:0x10000 key:0x182000 00:11:46.297 [2024-04-19 04:03:00.809955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.809963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:73472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a0a580 len:0x10000 key:0x182000 00:11:46.297 [2024-04-19 04:03:00.809969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.809978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea980 len:0x10000 key:0x182400 00:11:46.297 [2024-04-19 04:03:00.809984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.809991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:65536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000134c7000 len:0x10000 key:0x182300 00:11:46.297 [2024-04-19 04:03:00.809997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.810004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:65664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b718000 len:0x10000 key:0x182300 00:11:46.297 [2024-04-19 04:03:00.810010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.810017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b6f7000 len:0x10000 key:0x182300 00:11:46.297 [2024-04-19 04:03:00.810025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.810033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:65920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be71000 len:0x10000 key:0x182300 00:11:46.297 [2024-04-19 04:03:00.810038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 
cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.810046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:66048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be50000 len:0x10000 key:0x182300 00:11:46.297 [2024-04-19 04:03:00.810052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.810060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:66176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c4a1000 len:0x10000 key:0x182300 00:11:46.297 [2024-04-19 04:03:00.810066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.810075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:66304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c480000 len:0x10000 key:0x182300 00:11:46.297 [2024-04-19 04:03:00.810082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.810090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cad1000 len:0x10000 key:0x182300 00:11:46.297 [2024-04-19 04:03:00.810096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.810103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:66560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cab0000 len:0x10000 key:0x182300 00:11:46.297 [2024-04-19 04:03:00.810109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 
dnr:0 00:11:46.297 [2024-04-19 04:03:00.810117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:66688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ceaf000 len:0x10000 key:0x182300 00:11:46.297 [2024-04-19 04:03:00.810123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.810132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:66816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce8e000 len:0x10000 key:0x182300 00:11:46.297 [2024-04-19 04:03:00.810138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.810145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce6d000 len:0x10000 key:0x182300 00:11:46.297 [2024-04-19 04:03:00.810151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.810158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:67072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce4c000 len:0x10000 key:0x182300 00:11:46.297 [2024-04-19 04:03:00.810164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.810171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:67200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce2b000 len:0x10000 key:0x182300 00:11:46.297 [2024-04-19 04:03:00.810177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 
04:03:00.810184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:67328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce0a000 len:0x10000 key:0x182300 00:11:46.297 [2024-04-19 04:03:00.810190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.810197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:67456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cde9000 len:0x10000 key:0x182300 00:11:46.297 [2024-04-19 04:03:00.810203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.810210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:67584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cdc8000 len:0x10000 key:0x182300 00:11:46.297 [2024-04-19 04:03:00.810216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.810224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:67712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cda7000 len:0x10000 key:0x182300 00:11:46.297 [2024-04-19 04:03:00.810230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.810237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:67840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd86000 len:0x10000 key:0x182300 00:11:46.297 [2024-04-19 04:03:00.810243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.297 [2024-04-19 04:03:00.810250] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd65000 len:0x10000 key:0x182300 00:11:46.297 [2024-04-19 04:03:00.810256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.298 [2024-04-19 04:03:00.810263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:68096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd44000 len:0x10000 key:0x182300 00:11:46.298 [2024-04-19 04:03:00.810269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.298 [2024-04-19 04:03:00.810277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:68224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd23000 len:0x10000 key:0x182300 00:11:46.298 [2024-04-19 04:03:00.810283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.298 [2024-04-19 04:03:00.810291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:68352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd02000 len:0x10000 key:0x182300 00:11:46.298 [2024-04-19 04:03:00.810298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.298 [2024-04-19 04:03:00.810305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:68480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cce1000 len:0x10000 key:0x182300 00:11:46.298 [2024-04-19 04:03:00.810311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.298 [2024-04-19 04:03:00.810318] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:68608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ccc0000 len:0x10000 key:0x182300 00:11:46.298 [2024-04-19 04:03:00.810323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.298 [2024-04-19 04:03:00.810331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:68736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d0bf000 len:0x10000 key:0x182300 00:11:46.298 [2024-04-19 04:03:00.810337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.298 [2024-04-19 04:03:00.810344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:68864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d09e000 len:0x10000 key:0x182300 00:11:46.298 [2024-04-19 04:03:00.810349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.298 [2024-04-19 04:03:00.810356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d07d000 len:0x10000 key:0x182300 00:11:46.298 [2024-04-19 04:03:00.810362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.298 [2024-04-19 04:03:00.810370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:69120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d05c000 len:0x10000 key:0x182300 00:11:46.298 [2024-04-19 04:03:00.810376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.298 [2024-04-19 04:03:00.810383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 
nsid:1 lba:69248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d03b000 len:0x10000 key:0x182300 00:11:46.298 [2024-04-19 04:03:00.810389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.298 [2024-04-19 04:03:00.810396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:69376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d01a000 len:0x10000 key:0x182300 00:11:46.298 [2024-04-19 04:03:00.810405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.298 [2024-04-19 04:03:00.810413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:69504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cff9000 len:0x10000 key:0x182300 00:11:46.298 [2024-04-19 04:03:00.810418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.298 [2024-04-19 04:03:00.810425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:69632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf96000 len:0x10000 key:0x182300 00:11:46.298 [2024-04-19 04:03:00.810435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.298 [2024-04-19 04:03:00.810442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:69760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf75000 len:0x10000 key:0x182300 00:11:46.298 [2024-04-19 04:03:00.810448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.298 [2024-04-19 04:03:00.810455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:69888 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x20000cf54000 len:0x10000 key:0x182300 00:11:46.298 [2024-04-19 04:03:00.810463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.298 [2024-04-19 04:03:00.810470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:70016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf33000 len:0x10000 key:0x182300 00:11:46.298 [2024-04-19 04:03:00.810476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.298 [2024-04-19 04:03:00.810483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:70144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf12000 len:0x10000 key:0x182300 00:11:46.298 [2024-04-19 04:03:00.810489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.298 [2024-04-19 04:03:00.810496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:70272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cef1000 len:0x10000 key:0x182300 00:11:46.298 [2024-04-19 04:03:00.810502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.298 [2024-04-19 04:03:00.810510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ced0000 len:0x10000 key:0x182300 00:11:46.298 [2024-04-19 04:03:00.810517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.298 [2024-04-19 04:03:00.810524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d2cf000 
len:0x10000 key:0x182300 00:11:46.298 [2024-04-19 04:03:00.810530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.298 [2024-04-19 04:03:00.810537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:70656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d2ae000 len:0x10000 key:0x182300 00:11:46.298 [2024-04-19 04:03:00.810543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:18c0 p:0 m:0 dnr:0 00:11:46.298 [2024-04-19 04:03:00.812327] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192015c0 was disconnected and freed. reset controller. 00:11:46.298 [2024-04-19 04:03:00.813182] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:11:46.298 task offset: 70784 on job bdev=Nvme0n1 fails 00:11:46.298 00:11:46.298 Latency(us) 00:11:46.298 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:46.298 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:46.298 Job: Nvme0n1 ended in about 1.57 seconds with error 00:11:46.298 Verification LBA range: start 0x0 length 0x400 00:11:46.298 Nvme0n1 : 1.57 977.31 61.08 40.72 0.00 62271.83 1941.81 1012846.74 00:11:46.298 =================================================================================================================== 00:11:46.298 Total : 977.31 61.08 40.72 0.00 62271.83 1941.81 1012846.74 00:11:46.298 [2024-04-19 04:03:00.814931] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:46.298 04:03:00 -- target/host_management.sh@91 -- # kill -9 238521 00:11:46.298 04:03:00 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:46.298 04:03:00 -- 
target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:46.298 04:03:00 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:46.298 04:03:00 -- nvmf/common.sh@521 -- # config=() 00:11:46.298 04:03:00 -- nvmf/common.sh@521 -- # local subsystem config 00:11:46.298 04:03:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:46.298 04:03:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:46.298 { 00:11:46.298 "params": { 00:11:46.298 "name": "Nvme$subsystem", 00:11:46.298 "trtype": "$TEST_TRANSPORT", 00:11:46.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:46.298 "adrfam": "ipv4", 00:11:46.298 "trsvcid": "$NVMF_PORT", 00:11:46.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:46.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:46.298 "hdgst": ${hdgst:-false}, 00:11:46.298 "ddgst": ${ddgst:-false} 00:11:46.298 }, 00:11:46.298 "method": "bdev_nvme_attach_controller" 00:11:46.298 } 00:11:46.298 EOF 00:11:46.298 )") 00:11:46.298 04:03:00 -- nvmf/common.sh@543 -- # cat 00:11:46.558 04:03:00 -- nvmf/common.sh@545 -- # jq . 00:11:46.558 04:03:00 -- nvmf/common.sh@546 -- # IFS=, 00:11:46.558 04:03:00 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:46.558 "params": { 00:11:46.558 "name": "Nvme0", 00:11:46.558 "trtype": "rdma", 00:11:46.558 "traddr": "192.168.100.8", 00:11:46.558 "adrfam": "ipv4", 00:11:46.558 "trsvcid": "4420", 00:11:46.558 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:46.558 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:46.558 "hdgst": false, 00:11:46.558 "ddgst": false 00:11:46.558 }, 00:11:46.558 "method": "bdev_nvme_attach_controller" 00:11:46.558 }' 00:11:46.558 [2024-04-19 04:03:00.859151] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:11:46.558 [2024-04-19 04:03:00.859196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid238863 ] 00:11:46.558 EAL: No free 2048 kB hugepages reported on node 1 00:11:46.558 [2024-04-19 04:03:00.909100] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.558 [2024-04-19 04:03:00.976791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.818 Running I/O for 1 seconds... 00:11:47.757 00:11:47.757 Latency(us) 00:11:47.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:47.757 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:47.757 Verification LBA range: start 0x0 length 0x400 00:11:47.757 Nvme0n1 : 1.01 3329.68 208.10 0.00 0.00 18841.30 631.09 26408.58 00:11:47.757 =================================================================================================================== 00:11:47.757 Total : 3329.68 208.10 0.00 0.00 18841.30 631.09 26408.58 00:11:48.017 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 238521 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:11:48.017 04:03:02 -- target/host_management.sh@102 -- # stoptarget 00:11:48.017 04:03:02 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:48.017 04:03:02 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:48.017 04:03:02 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:48.017 04:03:02 -- target/host_management.sh@40 -- # nvmftestfini 00:11:48.017 04:03:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:48.017 04:03:02 -- 
nvmf/common.sh@117 -- # sync 00:11:48.017 04:03:02 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:48.017 04:03:02 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:48.017 04:03:02 -- nvmf/common.sh@120 -- # set +e 00:11:48.017 04:03:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:48.017 04:03:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:48.017 rmmod nvme_rdma 00:11:48.017 rmmod nvme_fabrics 00:11:48.017 04:03:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:48.017 04:03:02 -- nvmf/common.sh@124 -- # set -e 00:11:48.017 04:03:02 -- nvmf/common.sh@125 -- # return 0 00:11:48.017 04:03:02 -- nvmf/common.sh@478 -- # '[' -n 238225 ']' 00:11:48.017 04:03:02 -- nvmf/common.sh@479 -- # killprocess 238225 00:11:48.017 04:03:02 -- common/autotest_common.sh@936 -- # '[' -z 238225 ']' 00:11:48.017 04:03:02 -- common/autotest_common.sh@940 -- # kill -0 238225 00:11:48.017 04:03:02 -- common/autotest_common.sh@941 -- # uname 00:11:48.017 04:03:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:48.017 04:03:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 238225 00:11:48.017 04:03:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:48.017 04:03:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:48.017 04:03:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 238225' 00:11:48.017 killing process with pid 238225 00:11:48.017 04:03:02 -- common/autotest_common.sh@955 -- # kill 238225 00:11:48.017 04:03:02 -- common/autotest_common.sh@960 -- # wait 238225 00:11:48.277 [2024-04-19 04:03:02.748807] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:48.277 04:03:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:48.277 04:03:02 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:11:48.277 00:11:48.277 real 0m4.976s 00:11:48.277 user 0m22.420s 00:11:48.277 sys 0m0.789s 00:11:48.277 04:03:02 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:11:48.277 04:03:02 -- common/autotest_common.sh@10 -- # set +x 00:11:48.277 ************************************ 00:11:48.277 END TEST nvmf_host_management 00:11:48.277 ************************************ 00:11:48.277 04:03:02 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:48.537 00:11:48.537 real 0m10.291s 00:11:48.537 user 0m23.896s 00:11:48.537 sys 0m4.605s 00:11:48.537 04:03:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:48.537 04:03:02 -- common/autotest_common.sh@10 -- # set +x 00:11:48.537 ************************************ 00:11:48.537 END TEST nvmf_host_management 00:11:48.537 ************************************ 00:11:48.537 04:03:02 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:11:48.537 04:03:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:48.537 04:03:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:48.537 04:03:02 -- common/autotest_common.sh@10 -- # set +x 00:11:48.537 ************************************ 00:11:48.537 START TEST nvmf_lvol 00:11:48.537 ************************************ 00:11:48.537 04:03:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:11:48.537 * Looking for test storage... 
00:11:48.537 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:48.537 04:03:03 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.805 04:03:03 -- nvmf/common.sh@7 -- # uname -s 00:11:48.805 04:03:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.805 04:03:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.805 04:03:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.805 04:03:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.805 04:03:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.805 04:03:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.805 04:03:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.805 04:03:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.805 04:03:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.805 04:03:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.805 04:03:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:11:48.805 04:03:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:11:48.805 04:03:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.805 04:03:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.805 04:03:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.805 04:03:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.805 04:03:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:48.805 04:03:03 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.805 04:03:03 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.805 04:03:03 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.805 04:03:03 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.805 04:03:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.805 04:03:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.805 04:03:03 -- paths/export.sh@5 -- # export PATH 00:11:48.805 04:03:03 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.805 04:03:03 -- nvmf/common.sh@47 -- # : 0 00:11:48.805 04:03:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:48.805 04:03:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:48.805 04:03:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.805 04:03:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.805 04:03:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.805 04:03:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:48.805 04:03:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:48.805 04:03:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:48.805 04:03:03 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:48.805 04:03:03 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:48.805 04:03:03 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:48.805 04:03:03 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:48.805 04:03:03 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:48.805 04:03:03 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:48.805 04:03:03 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:11:48.805 04:03:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.805 04:03:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:48.805 04:03:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:48.805 04:03:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:48.805 
04:03:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.805 04:03:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:48.805 04:03:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.805 04:03:03 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:48.805 04:03:03 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:48.805 04:03:03 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:48.805 04:03:03 -- common/autotest_common.sh@10 -- # set +x 00:11:54.084 04:03:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:54.084 04:03:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:54.084 04:03:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:54.084 04:03:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:54.084 04:03:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:54.084 04:03:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:54.085 04:03:08 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:54.085 04:03:08 -- nvmf/common.sh@295 -- # net_devs=() 00:11:54.085 04:03:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:54.085 04:03:08 -- nvmf/common.sh@296 -- # e810=() 00:11:54.085 04:03:08 -- nvmf/common.sh@296 -- # local -ga e810 00:11:54.085 04:03:08 -- nvmf/common.sh@297 -- # x722=() 00:11:54.085 04:03:08 -- nvmf/common.sh@297 -- # local -ga x722 00:11:54.085 04:03:08 -- nvmf/common.sh@298 -- # mlx=() 00:11:54.085 04:03:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:54.085 04:03:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:54.085 04:03:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:54.085 04:03:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:54.085 04:03:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:54.085 04:03:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:54.085 04:03:08 -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:54.085 04:03:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:54.085 04:03:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:54.085 04:03:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:54.085 04:03:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:54.085 04:03:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:54.085 04:03:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:54.085 04:03:08 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:54.085 04:03:08 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:54.085 04:03:08 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:54.085 04:03:08 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:54.085 04:03:08 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:54.085 04:03:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:54.085 04:03:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:54.085 04:03:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:54.085 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:54.085 04:03:08 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:54.085 04:03:08 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:54.085 04:03:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:54.085 04:03:08 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:54.085 04:03:08 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:54.085 04:03:08 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:54.085 04:03:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:54.085 04:03:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:54.085 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:54.085 04:03:08 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 
00:11:54.085 04:03:08 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:54.085 04:03:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:54.085 04:03:08 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:54.085 04:03:08 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:54.085 04:03:08 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:54.085 04:03:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:54.085 04:03:08 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:54.085 04:03:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:54.085 04:03:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.085 04:03:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:54.085 04:03:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.085 04:03:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:54.085 Found net devices under 0000:18:00.0: mlx_0_0 00:11:54.085 04:03:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.085 04:03:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:54.085 04:03:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.085 04:03:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:54.085 04:03:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.085 04:03:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:54.085 Found net devices under 0000:18:00.1: mlx_0_1 00:11:54.085 04:03:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.085 04:03:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:54.085 04:03:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:54.085 04:03:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:54.085 04:03:08 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:11:54.085 04:03:08 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:11:54.085 04:03:08 -- 
nvmf/common.sh@409 -- # rdma_device_init 00:11:54.085 04:03:08 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:11:54.085 04:03:08 -- nvmf/common.sh@58 -- # uname 00:11:54.085 04:03:08 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:54.085 04:03:08 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:54.085 04:03:08 -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:54.085 04:03:08 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:54.085 04:03:08 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:54.085 04:03:08 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:54.085 04:03:08 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:54.085 04:03:08 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:54.085 04:03:08 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:11:54.085 04:03:08 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:54.085 04:03:08 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:54.085 04:03:08 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:54.085 04:03:08 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:54.085 04:03:08 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:54.085 04:03:08 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:54.085 04:03:08 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:54.085 04:03:08 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:54.085 04:03:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:54.085 04:03:08 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:54.085 04:03:08 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:54.085 04:03:08 -- nvmf/common.sh@105 -- # continue 2 00:11:54.085 04:03:08 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:54.085 04:03:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:54.085 04:03:08 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:54.085 04:03:08 -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:11:54.085 04:03:08 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:54.085 04:03:08 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:54.085 04:03:08 -- nvmf/common.sh@105 -- # continue 2 00:11:54.085 04:03:08 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:54.085 04:03:08 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:54.085 04:03:08 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:54.085 04:03:08 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:54.085 04:03:08 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:54.085 04:03:08 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:54.085 04:03:08 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:54.085 04:03:08 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:54.085 04:03:08 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:54.085 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:54.085 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:11:54.085 altname enp24s0f0np0 00:11:54.085 altname ens785f0np0 00:11:54.085 inet 192.168.100.8/24 scope global mlx_0_0 00:11:54.085 valid_lft forever preferred_lft forever 00:11:54.085 04:03:08 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:54.085 04:03:08 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:54.085 04:03:08 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:54.085 04:03:08 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:54.085 04:03:08 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:54.085 04:03:08 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:54.085 04:03:08 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:54.085 04:03:08 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:54.085 04:03:08 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:54.085 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:54.085 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:11:54.085 altname enp24s0f1np1 
00:11:54.085 altname ens785f1np1 00:11:54.085 inet 192.168.100.9/24 scope global mlx_0_1 00:11:54.085 valid_lft forever preferred_lft forever 00:11:54.085 04:03:08 -- nvmf/common.sh@411 -- # return 0 00:11:54.085 04:03:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:54.085 04:03:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:54.085 04:03:08 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:11:54.085 04:03:08 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:11:54.085 04:03:08 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:54.085 04:03:08 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:54.085 04:03:08 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:54.085 04:03:08 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:54.085 04:03:08 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:54.085 04:03:08 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:54.085 04:03:08 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:54.085 04:03:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:54.085 04:03:08 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:54.085 04:03:08 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:54.085 04:03:08 -- nvmf/common.sh@105 -- # continue 2 00:11:54.085 04:03:08 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:54.085 04:03:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:54.085 04:03:08 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:54.085 04:03:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:54.085 04:03:08 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:54.085 04:03:08 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:54.085 04:03:08 -- nvmf/common.sh@105 -- # continue 2 00:11:54.085 04:03:08 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:54.085 04:03:08 -- 
nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:54.085 04:03:08 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:54.085 04:03:08 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:54.086 04:03:08 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:54.086 04:03:08 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:54.086 04:03:08 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:54.086 04:03:08 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:54.086 04:03:08 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:54.086 04:03:08 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:54.086 04:03:08 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:54.086 04:03:08 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:54.086 04:03:08 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:11:54.086 192.168.100.9' 00:11:54.086 04:03:08 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:11:54.086 192.168.100.9' 00:11:54.086 04:03:08 -- nvmf/common.sh@446 -- # head -n 1 00:11:54.086 04:03:08 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:54.086 04:03:08 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:11:54.086 192.168.100.9' 00:11:54.086 04:03:08 -- nvmf/common.sh@447 -- # tail -n +2 00:11:54.086 04:03:08 -- nvmf/common.sh@447 -- # head -n 1 00:11:54.086 04:03:08 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:54.086 04:03:08 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:11:54.086 04:03:08 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:54.086 04:03:08 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:11:54.086 04:03:08 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:11:54.086 04:03:08 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:11:54.086 04:03:08 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:54.086 04:03:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:54.086 04:03:08 -- common/autotest_common.sh@710 -- # xtrace_disable 
00:11:54.086 04:03:08 -- common/autotest_common.sh@10 -- # set +x 00:11:54.086 04:03:08 -- nvmf/common.sh@470 -- # nvmfpid=243107 00:11:54.086 04:03:08 -- nvmf/common.sh@471 -- # waitforlisten 243107 00:11:54.086 04:03:08 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:54.086 04:03:08 -- common/autotest_common.sh@817 -- # '[' -z 243107 ']' 00:11:54.086 04:03:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.086 04:03:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:54.086 04:03:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.086 04:03:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:54.086 04:03:08 -- common/autotest_common.sh@10 -- # set +x 00:11:54.346 [2024-04-19 04:03:08.637351] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:11:54.346 [2024-04-19 04:03:08.637408] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.346 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.346 [2024-04-19 04:03:08.690542] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:54.346 [2024-04-19 04:03:08.764762] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:54.346 [2024-04-19 04:03:08.764802] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:54.346 [2024-04-19 04:03:08.764808] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:54.346 [2024-04-19 04:03:08.764814] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:54.346 [2024-04-19 04:03:08.764818] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:54.346 [2024-04-19 04:03:08.764863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.346 [2024-04-19 04:03:08.764956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.346 [2024-04-19 04:03:08.764958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:54.915 04:03:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:54.915 04:03:09 -- common/autotest_common.sh@850 -- # return 0 00:11:54.915 04:03:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:54.915 04:03:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:54.915 04:03:09 -- common/autotest_common.sh@10 -- # set +x 00:11:54.915 04:03:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.915 04:03:09 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:55.174 [2024-04-19 04:03:09.598751] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1fe9be0/0x1fee0d0) succeed. 00:11:55.174 [2024-04-19 04:03:09.607711] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1feb130/0x202f760) succeed. 
00:11:55.433 04:03:09 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:55.433 04:03:09 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:55.433 04:03:09 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:55.692 04:03:10 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:55.692 04:03:10 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:55.952 04:03:10 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:55.952 04:03:10 -- target/nvmf_lvol.sh@29 -- # lvs=fd56dd15-f729-4020-b4e2-80363544855e 00:11:55.952 04:03:10 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fd56dd15-f729-4020-b4e2-80363544855e lvol 20 00:11:56.223 04:03:10 -- target/nvmf_lvol.sh@32 -- # lvol=5e1ce0ad-7663-4e7b-a89f-d0cef04eb8f2 00:11:56.223 04:03:10 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:56.223 04:03:10 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5e1ce0ad-7663-4e7b-a89f-d0cef04eb8f2 00:11:56.484 04:03:10 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:11:56.743 [2024-04-19 04:03:11.027439] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:56.743 04:03:11 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma 
-a 192.168.100.8 -s 4420 00:11:56.743 04:03:11 -- target/nvmf_lvol.sh@42 -- # perf_pid=243527 00:11:56.743 04:03:11 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:56.743 04:03:11 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:56.743 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.123 04:03:12 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 5e1ce0ad-7663-4e7b-a89f-d0cef04eb8f2 MY_SNAPSHOT 00:11:58.123 04:03:12 -- target/nvmf_lvol.sh@47 -- # snapshot=97c50be7-6f79-476d-9de8-407512b0fafa 00:11:58.123 04:03:12 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 5e1ce0ad-7663-4e7b-a89f-d0cef04eb8f2 30 00:11:58.123 04:03:12 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 97c50be7-6f79-476d-9de8-407512b0fafa MY_CLONE 00:11:58.382 04:03:12 -- target/nvmf_lvol.sh@49 -- # clone=d735e9c0-4ad3-43f1-9758-691af88f0836 00:11:58.382 04:03:12 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d735e9c0-4ad3-43f1-9758-691af88f0836 00:11:58.642 04:03:12 -- target/nvmf_lvol.sh@53 -- # wait 243527 00:12:08.628 Initializing NVMe Controllers 00:12:08.629 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:12:08.629 Controller IO queue size 128, less than required. 00:12:08.629 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:08.629 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:08.629 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:08.629 Initialization complete. 
Launching workers. 00:12:08.629 ======================================================== 00:12:08.629 Latency(us) 00:12:08.629 Device Information : IOPS MiB/s Average min max 00:12:08.629 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 17478.00 68.27 7325.52 1440.94 45008.94 00:12:08.629 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17389.30 67.93 7362.37 2740.50 43281.55 00:12:08.629 ======================================================== 00:12:08.629 Total : 34867.30 136.20 7343.90 1440.94 45008.94 00:12:08.629 00:12:08.629 04:03:22 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:08.629 04:03:22 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5e1ce0ad-7663-4e7b-a89f-d0cef04eb8f2 00:12:08.629 04:03:22 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fd56dd15-f729-4020-b4e2-80363544855e 00:12:08.629 04:03:23 -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:08.629 04:03:23 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:08.629 04:03:23 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:08.629 04:03:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:08.629 04:03:23 -- nvmf/common.sh@117 -- # sync 00:12:08.629 04:03:23 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:08.629 04:03:23 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:08.629 04:03:23 -- nvmf/common.sh@120 -- # set +e 00:12:08.629 04:03:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:08.629 04:03:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:08.629 rmmod nvme_rdma 00:12:08.629 rmmod nvme_fabrics 00:12:08.629 04:03:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:08.629 04:03:23 -- nvmf/common.sh@124 -- # set -e 00:12:08.629 04:03:23 -- nvmf/common.sh@125 -- # 
return 0 00:12:08.629 04:03:23 -- nvmf/common.sh@478 -- # '[' -n 243107 ']' 00:12:08.629 04:03:23 -- nvmf/common.sh@479 -- # killprocess 243107 00:12:08.629 04:03:23 -- common/autotest_common.sh@936 -- # '[' -z 243107 ']' 00:12:08.629 04:03:23 -- common/autotest_common.sh@940 -- # kill -0 243107 00:12:08.629 04:03:23 -- common/autotest_common.sh@941 -- # uname 00:12:08.629 04:03:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:08.629 04:03:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 243107 00:12:08.629 04:03:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:08.629 04:03:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:08.629 04:03:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 243107' 00:12:08.629 killing process with pid 243107 00:12:08.629 04:03:23 -- common/autotest_common.sh@955 -- # kill 243107 00:12:08.629 04:03:23 -- common/autotest_common.sh@960 -- # wait 243107 00:12:09.198 04:03:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:09.198 04:03:23 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:12:09.198 00:12:09.198 real 0m20.467s 00:12:09.198 user 1m9.846s 00:12:09.198 sys 0m5.147s 00:12:09.198 04:03:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:09.198 04:03:23 -- common/autotest_common.sh@10 -- # set +x 00:12:09.198 ************************************ 00:12:09.198 END TEST nvmf_lvol 00:12:09.198 ************************************ 00:12:09.198 04:03:23 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:12:09.198 04:03:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:09.198 04:03:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:09.198 04:03:23 -- common/autotest_common.sh@10 -- # set +x 00:12:09.198 ************************************ 00:12:09.198 START TEST nvmf_lvs_grow 00:12:09.198 
************************************ 00:12:09.198 04:03:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:12:09.198 * Looking for test storage... 00:12:09.198 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:09.198 04:03:23 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:09.198 04:03:23 -- nvmf/common.sh@7 -- # uname -s 00:12:09.198 04:03:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.198 04:03:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.198 04:03:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.198 04:03:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.198 04:03:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.198 04:03:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.198 04:03:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.198 04:03:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.198 04:03:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.198 04:03:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.198 04:03:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:09.198 04:03:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:12:09.198 04:03:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.198 04:03:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.198 04:03:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:09.198 04:03:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:09.198 04:03:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:09.198 04:03:23 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 
00:12:09.198 04:03:23 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.198 04:03:23 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.198 04:03:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.198 04:03:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.198 04:03:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.198 04:03:23 -- paths/export.sh@5 -- # export PATH 00:12:09.199 04:03:23 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.199 04:03:23 -- nvmf/common.sh@47 -- # : 0 00:12:09.199 04:03:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:09.199 04:03:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:09.199 04:03:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:09.199 04:03:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.199 04:03:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.199 04:03:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:09.199 04:03:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:09.199 04:03:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:09.199 04:03:23 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:09.199 04:03:23 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:09.199 04:03:23 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:12:09.199 04:03:23 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:12:09.199 04:03:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:09.199 04:03:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:09.199 04:03:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:09.199 04:03:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:09.199 04:03:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.199 04:03:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:12:09.199 04:03:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.199 04:03:23 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:09.199 04:03:23 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:09.199 04:03:23 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:09.199 04:03:23 -- common/autotest_common.sh@10 -- # set +x 00:12:15.779 04:03:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:15.779 04:03:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:15.779 04:03:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:15.779 04:03:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:15.779 04:03:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:15.779 04:03:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:15.779 04:03:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:15.779 04:03:29 -- nvmf/common.sh@295 -- # net_devs=() 00:12:15.779 04:03:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:15.779 04:03:29 -- nvmf/common.sh@296 -- # e810=() 00:12:15.779 04:03:29 -- nvmf/common.sh@296 -- # local -ga e810 00:12:15.779 04:03:29 -- nvmf/common.sh@297 -- # x722=() 00:12:15.779 04:03:29 -- nvmf/common.sh@297 -- # local -ga x722 00:12:15.779 04:03:29 -- nvmf/common.sh@298 -- # mlx=() 00:12:15.779 04:03:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:15.779 04:03:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.779 04:03:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.779 04:03:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.779 04:03:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.779 04:03:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.779 04:03:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.779 04:03:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.779 
04:03:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.779 04:03:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.779 04:03:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.779 04:03:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.779 04:03:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:15.779 04:03:29 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:15.779 04:03:29 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:15.779 04:03:29 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:15.779 04:03:29 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:15.779 04:03:29 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:15.779 04:03:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:15.779 04:03:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:15.779 04:03:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:15.779 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:15.779 04:03:29 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:15.779 04:03:29 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:15.779 04:03:29 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:15.779 04:03:29 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:15.779 04:03:29 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:15.779 04:03:29 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:15.779 04:03:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:15.779 04:03:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:15.779 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:15.779 04:03:29 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:15.779 04:03:29 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:15.779 04:03:29 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:15.779 
04:03:29 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:15.779 04:03:29 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:15.779 04:03:29 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:15.779 04:03:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:15.779 04:03:29 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:15.779 04:03:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:15.779 04:03:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.779 04:03:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:15.779 04:03:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.779 04:03:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:15.779 Found net devices under 0000:18:00.0: mlx_0_0 00:12:15.779 04:03:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.779 04:03:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:15.779 04:03:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.779 04:03:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:15.779 04:03:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.779 04:03:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:15.779 Found net devices under 0000:18:00.1: mlx_0_1 00:12:15.779 04:03:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.779 04:03:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:15.779 04:03:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:15.779 04:03:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:15.779 04:03:29 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:12:15.779 04:03:29 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:12:15.779 04:03:29 -- nvmf/common.sh@409 -- # rdma_device_init 00:12:15.779 04:03:29 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:12:15.779 04:03:29 -- nvmf/common.sh@58 -- # uname 
00:12:15.779 04:03:29 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:15.779 04:03:29 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:15.779 04:03:29 -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:15.779 04:03:29 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:15.779 04:03:29 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:15.779 04:03:29 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:15.779 04:03:29 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:15.779 04:03:29 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:15.779 04:03:29 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:12:15.779 04:03:29 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:15.779 04:03:29 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:15.779 04:03:29 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:15.779 04:03:29 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:15.779 04:03:29 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:15.779 04:03:29 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:15.779 04:03:29 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:15.779 04:03:29 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:15.779 04:03:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:15.779 04:03:29 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:15.779 04:03:29 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:15.779 04:03:29 -- nvmf/common.sh@105 -- # continue 2 00:12:15.779 04:03:29 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:15.779 04:03:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:15.779 04:03:29 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:15.779 04:03:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:15.779 04:03:29 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:15.779 04:03:29 -- nvmf/common.sh@104 -- # echo 
mlx_0_1 00:12:15.779 04:03:29 -- nvmf/common.sh@105 -- # continue 2 00:12:15.779 04:03:29 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:15.779 04:03:29 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:15.779 04:03:29 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:15.779 04:03:29 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:15.779 04:03:29 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:15.779 04:03:29 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:15.779 04:03:29 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:15.779 04:03:29 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:15.779 04:03:29 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:15.779 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:15.779 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:12:15.779 altname enp24s0f0np0 00:12:15.779 altname ens785f0np0 00:12:15.780 inet 192.168.100.8/24 scope global mlx_0_0 00:12:15.780 valid_lft forever preferred_lft forever 00:12:15.780 04:03:29 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:15.780 04:03:29 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:15.780 04:03:29 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:15.780 04:03:29 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:15.780 04:03:29 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:15.780 04:03:29 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:15.780 04:03:29 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:15.780 04:03:29 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:15.780 04:03:29 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:15.780 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:15.780 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:12:15.780 altname enp24s0f1np1 00:12:15.780 altname ens785f1np1 00:12:15.780 inet 192.168.100.9/24 scope global mlx_0_1 00:12:15.780 valid_lft forever preferred_lft forever 00:12:15.780 04:03:29 -- 
nvmf/common.sh@411 -- # return 0 00:12:15.780 04:03:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:15.780 04:03:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:15.780 04:03:29 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:12:15.780 04:03:29 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:12:15.780 04:03:29 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:15.780 04:03:29 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:15.780 04:03:29 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:15.780 04:03:29 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:15.780 04:03:29 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:15.780 04:03:29 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:15.780 04:03:29 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:15.780 04:03:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:15.780 04:03:29 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:15.780 04:03:29 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:15.780 04:03:29 -- nvmf/common.sh@105 -- # continue 2 00:12:15.780 04:03:29 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:15.780 04:03:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:15.780 04:03:29 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:15.780 04:03:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:15.780 04:03:29 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:15.780 04:03:29 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:15.780 04:03:29 -- nvmf/common.sh@105 -- # continue 2 00:12:15.780 04:03:29 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:15.780 04:03:29 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:15.780 04:03:29 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:15.780 04:03:29 -- nvmf/common.sh@113 -- # ip -o -4 addr 
show mlx_0_0 00:12:15.780 04:03:29 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:15.780 04:03:29 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:15.780 04:03:29 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:15.780 04:03:29 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:15.780 04:03:29 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:15.780 04:03:29 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:15.780 04:03:29 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:15.780 04:03:29 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:15.780 04:03:29 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:12:15.780 192.168.100.9' 00:12:15.780 04:03:29 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:12:15.780 192.168.100.9' 00:12:15.780 04:03:29 -- nvmf/common.sh@446 -- # head -n 1 00:12:15.780 04:03:29 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:15.780 04:03:29 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:12:15.780 192.168.100.9' 00:12:15.780 04:03:29 -- nvmf/common.sh@447 -- # tail -n +2 00:12:15.780 04:03:29 -- nvmf/common.sh@447 -- # head -n 1 00:12:15.780 04:03:29 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:15.780 04:03:29 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:12:15.780 04:03:29 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:15.780 04:03:29 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:12:15.780 04:03:29 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:12:15.780 04:03:29 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:12:15.780 04:03:29 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:12:15.780 04:03:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:15.780 04:03:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:15.780 04:03:29 -- common/autotest_common.sh@10 -- # set +x 00:12:15.780 04:03:29 -- nvmf/common.sh@470 -- # nvmfpid=249137 00:12:15.780 04:03:29 -- nvmf/common.sh@471 
-- # waitforlisten 249137 00:12:15.780 04:03:29 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:15.780 04:03:29 -- common/autotest_common.sh@817 -- # '[' -z 249137 ']' 00:12:15.780 04:03:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.780 04:03:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:15.780 04:03:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.780 04:03:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:15.780 04:03:29 -- common/autotest_common.sh@10 -- # set +x 00:12:15.780 [2024-04-19 04:03:29.281170] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:12:15.780 [2024-04-19 04:03:29.281221] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.780 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.780 [2024-04-19 04:03:29.334715] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.780 [2024-04-19 04:03:29.405341] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.780 [2024-04-19 04:03:29.405376] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.780 [2024-04-19 04:03:29.405383] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.780 [2024-04-19 04:03:29.405388] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:12:15.780 [2024-04-19 04:03:29.405392] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:15.780 [2024-04-19 04:03:29.405433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.780 04:03:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:15.780 04:03:30 -- common/autotest_common.sh@850 -- # return 0 00:12:15.780 04:03:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:15.780 04:03:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:15.780 04:03:30 -- common/autotest_common.sh@10 -- # set +x 00:12:15.780 04:03:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.780 04:03:30 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:15.780 [2024-04-19 04:03:30.248081] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f09520/0x1f0da10) succeed. 00:12:15.780 [2024-04-19 04:03:30.255998] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f0aa20/0x1f4f0a0) succeed. 
00:12:16.040 04:03:30 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:12:16.040 04:03:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:16.040 04:03:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:16.040 04:03:30 -- common/autotest_common.sh@10 -- # set +x 00:12:16.040 ************************************ 00:12:16.040 START TEST lvs_grow_clean 00:12:16.040 ************************************ 00:12:16.040 04:03:30 -- common/autotest_common.sh@1111 -- # lvs_grow 00:12:16.040 04:03:30 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:16.040 04:03:30 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:16.040 04:03:30 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:16.040 04:03:30 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:16.040 04:03:30 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:16.040 04:03:30 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:16.040 04:03:30 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:16.040 04:03:30 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:16.040 04:03:30 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:16.300 04:03:30 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:16.300 04:03:30 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:16.300 04:03:30 -- target/nvmf_lvs_grow.sh@28 -- # lvs=34201cee-c137-4ffb-b5f8-1a2da53d0e91 00:12:16.300 04:03:30 -- target/nvmf_lvs_grow.sh@29 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34201cee-c137-4ffb-b5f8-1a2da53d0e91 00:12:16.300 04:03:30 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:16.559 04:03:30 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:16.559 04:03:30 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:16.559 04:03:30 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 34201cee-c137-4ffb-b5f8-1a2da53d0e91 lvol 150 00:12:16.559 04:03:31 -- target/nvmf_lvs_grow.sh@33 -- # lvol=1f552a89-40cb-4181-9459-81f973d990e0 00:12:16.559 04:03:31 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:16.818 04:03:31 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:16.818 [2024-04-19 04:03:31.217465] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:16.818 [2024-04-19 04:03:31.217516] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:16.818 true 00:12:16.818 04:03:31 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34201cee-c137-4ffb-b5f8-1a2da53d0e91 00:12:16.818 04:03:31 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:17.077 04:03:31 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:17.077 04:03:31 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:17.077 04:03:31 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1f552a89-40cb-4181-9459-81f973d990e0 00:12:17.337 04:03:31 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:12:17.337 [2024-04-19 04:03:31.787358] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:17.337 04:03:31 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:17.596 04:03:31 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=249664 00:12:17.596 04:03:31 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:17.596 04:03:31 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:17.596 04:03:31 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 249664 /var/tmp/bdevperf.sock 00:12:17.596 04:03:31 -- common/autotest_common.sh@817 -- # '[' -z 249664 ']' 00:12:17.596 04:03:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:17.596 04:03:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:17.597 04:03:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:17.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:17.597 04:03:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:17.597 04:03:31 -- common/autotest_common.sh@10 -- # set +x 00:12:17.597 [2024-04-19 04:03:31.964975] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:12:17.597 [2024-04-19 04:03:31.965016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid249664 ] 00:12:17.597 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.597 [2024-04-19 04:03:32.010820] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.597 [2024-04-19 04:03:32.083479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.536 04:03:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:18.536 04:03:32 -- common/autotest_common.sh@850 -- # return 0 00:12:18.536 04:03:32 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:18.536 Nvme0n1 00:12:18.536 04:03:32 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:18.536 [ 00:12:18.536 { 00:12:18.536 "name": "Nvme0n1", 00:12:18.536 "aliases": [ 00:12:18.536 "1f552a89-40cb-4181-9459-81f973d990e0" 00:12:18.536 ], 00:12:18.536 "product_name": "NVMe disk", 00:12:18.536 "block_size": 4096, 00:12:18.536 "num_blocks": 38912, 00:12:18.536 "uuid": "1f552a89-40cb-4181-9459-81f973d990e0", 00:12:18.536 "assigned_rate_limits": { 00:12:18.536 "rw_ios_per_sec": 0, 00:12:18.536 "rw_mbytes_per_sec": 0, 00:12:18.536 "r_mbytes_per_sec": 0, 00:12:18.536 "w_mbytes_per_sec": 0 00:12:18.536 }, 00:12:18.536 "claimed": false, 00:12:18.536 "zoned": false, 00:12:18.536 "supported_io_types": { 00:12:18.536 "read": true, 00:12:18.536 "write": true, 00:12:18.536 "unmap": true, 00:12:18.536 "write_zeroes": true, 00:12:18.536 "flush": true, 00:12:18.536 "reset": true, 00:12:18.536 "compare": true, 00:12:18.536 "compare_and_write": 
true, 00:12:18.536 "abort": true, 00:12:18.536 "nvme_admin": true, 00:12:18.536 "nvme_io": true 00:12:18.536 }, 00:12:18.536 "memory_domains": [ 00:12:18.536 { 00:12:18.536 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:12:18.536 "dma_device_type": 0 00:12:18.536 } 00:12:18.536 ], 00:12:18.536 "driver_specific": { 00:12:18.536 "nvme": [ 00:12:18.536 { 00:12:18.536 "trid": { 00:12:18.536 "trtype": "RDMA", 00:12:18.536 "adrfam": "IPv4", 00:12:18.536 "traddr": "192.168.100.8", 00:12:18.536 "trsvcid": "4420", 00:12:18.536 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:18.536 }, 00:12:18.536 "ctrlr_data": { 00:12:18.536 "cntlid": 1, 00:12:18.536 "vendor_id": "0x8086", 00:12:18.536 "model_number": "SPDK bdev Controller", 00:12:18.536 "serial_number": "SPDK0", 00:12:18.536 "firmware_revision": "24.05", 00:12:18.536 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:18.536 "oacs": { 00:12:18.536 "security": 0, 00:12:18.536 "format": 0, 00:12:18.536 "firmware": 0, 00:12:18.536 "ns_manage": 0 00:12:18.536 }, 00:12:18.536 "multi_ctrlr": true, 00:12:18.536 "ana_reporting": false 00:12:18.536 }, 00:12:18.536 "vs": { 00:12:18.536 "nvme_version": "1.3" 00:12:18.536 }, 00:12:18.536 "ns_data": { 00:12:18.536 "id": 1, 00:12:18.536 "can_share": true 00:12:18.536 } 00:12:18.536 } 00:12:18.536 ], 00:12:18.536 "mp_policy": "active_passive" 00:12:18.536 } 00:12:18.536 } 00:12:18.536 ] 00:12:18.796 04:03:33 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=249750 00:12:18.796 04:03:33 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:18.796 04:03:33 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:18.796 Running I/O for 10 seconds... 
00:12:19.733 Latency(us) 00:12:19.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.733 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:19.733 Nvme0n1 : 1.00 39009.00 152.38 0.00 0.00 0.00 0.00 0.00 00:12:19.733 =================================================================================================================== 00:12:19.733 Total : 39009.00 152.38 0.00 0.00 0.00 0.00 0.00 00:12:19.733 00:12:20.671 04:03:35 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 34201cee-c137-4ffb-b5f8-1a2da53d0e91 00:12:20.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:20.671 Nvme0n1 : 2.00 39328.50 153.63 0.00 0.00 0.00 0.00 0.00 00:12:20.671 =================================================================================================================== 00:12:20.671 Total : 39328.50 153.63 0.00 0.00 0.00 0.00 0.00 00:12:20.671 00:12:20.930 true 00:12:20.930 04:03:35 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34201cee-c137-4ffb-b5f8-1a2da53d0e91 00:12:20.930 04:03:35 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:20.930 04:03:35 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:20.930 04:03:35 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:20.930 04:03:35 -- target/nvmf_lvs_grow.sh@65 -- # wait 249750 00:12:21.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:21.868 Nvme0n1 : 3.00 39433.33 154.04 0.00 0.00 0.00 0.00 0.00 00:12:21.868 =================================================================================================================== 00:12:21.868 Total : 39433.33 154.04 0.00 0.00 0.00 0.00 0.00 00:12:21.868 00:12:22.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:22.806 
Nvme0n1 : 4.00 39551.00 154.50 0.00 0.00 0.00 0.00 0.00
00:12:22.806 ===================================================================================================================
00:12:22.806 Total : 39551.00 154.50 0.00 0.00 0.00 0.00 0.00
00:12:22.806
00:12:23.743 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:23.743 Nvme0n1 : 5.00 39621.60 154.77 0.00 0.00 0.00 0.00 0.00
00:12:23.743 ===================================================================================================================
00:12:23.743 Total : 39621.60 154.77 0.00 0.00 0.00 0.00 0.00
00:12:23.743
00:12:24.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:24.682 Nvme0n1 : 6.00 39670.33 154.96 0.00 0.00 0.00 0.00 0.00
00:12:24.682 ===================================================================================================================
00:12:24.682 Total : 39670.33 154.96 0.00 0.00 0.00 0.00 0.00
00:12:24.682
00:12:26.077 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:26.077 Nvme0n1 : 7.00 39671.71 154.97 0.00 0.00 0.00 0.00 0.00
00:12:26.077 ===================================================================================================================
00:12:26.077 Total : 39671.71 154.97 0.00 0.00 0.00 0.00 0.00
00:12:26.077
00:12:26.644 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:26.644 Nvme0n1 : 8.00 39655.50 154.90 0.00 0.00 0.00 0.00 0.00
00:12:26.644 ===================================================================================================================
00:12:26.644 Total : 39655.50 154.90 0.00 0.00 0.00 0.00 0.00
00:12:26.644
00:12:28.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:28.023 Nvme0n1 : 9.00 39691.33 155.04 0.00 0.00 0.00 0.00 0.00
00:12:28.023 ===================================================================================================================
00:12:28.023 Total : 39691.33 155.04 0.00 0.00 0.00 0.00 0.00
00:12:28.023
00:12:28.959 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:28.960 Nvme0n1 : 10.00 39719.00 155.15 0.00 0.00 0.00 0.00 0.00
00:12:28.960 ===================================================================================================================
00:12:28.960 Total : 39719.00 155.15 0.00 0.00 0.00 0.00 0.00
00:12:28.960
00:12:28.960
00:12:28.960 Latency(us)
00:12:28.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:28.960 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:28.960 Nvme0n1 : 10.00 39719.51 155.15 0.00 0.00 3220.02 2257.35 10048.85
00:12:28.960 ===================================================================================================================
00:12:28.960 Total : 39719.51 155.15 0.00 0.00 3220.02 2257.35 10048.85
00:12:28.960 0
00:12:28.960 04:03:43 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 249664
00:12:28.960 04:03:43 -- common/autotest_common.sh@936 -- # '[' -z 249664 ']'
00:12:28.960 04:03:43 -- common/autotest_common.sh@940 -- # kill -0 249664
00:12:28.960 04:03:43 -- common/autotest_common.sh@941 -- # uname
00:12:28.960 04:03:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:12:28.960 04:03:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 249664
00:12:28.960 04:03:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:12:28.960 04:03:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:12:28.960 04:03:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 249664'
killing process with pid 249664
04:03:43 -- common/autotest_common.sh@955 -- # kill 249664
00:12:28.960 Received shutdown signal, test time was about 10.000000 seconds
00:12:28.960
00:12:28.960 Latency(us)
00:12:28.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:28.960 ===================================================================================================================
00:12:28.960 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:12:28.960 04:03:43 -- common/autotest_common.sh@960 -- # wait 249664
00:12:28.960 04:03:43 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:12:29.219 04:03:43 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34201cee-c137-4ffb-b5f8-1a2da53d0e91
00:12:29.219 04:03:43 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters'
00:12:29.479 04:03:43 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61
00:12:29.479 04:03:43 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]]
00:12:29.479 04:03:43 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:12:29.479 [2024-04-19 04:03:43.928628] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:12:29.479 04:03:43 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34201cee-c137-4ffb-b5f8-1a2da53d0e91
00:12:29.479 04:03:43 -- common/autotest_common.sh@638 -- # local es=0
00:12:29.479 04:03:43 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34201cee-c137-4ffb-b5f8-1a2da53d0e91
00:12:29.479 04:03:43 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:12:29.479 04:03:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:12:29.479 04:03:43 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:12:29.479 04:03:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:12:29.479 04:03:43 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:12:29.479 04:03:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:12:29.479 04:03:43 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:12:29.479 04:03:43 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]]
00:12:29.479 04:03:43 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34201cee-c137-4ffb-b5f8-1a2da53d0e91
00:12:29.738 request:
00:12:29.738 {
00:12:29.738 "uuid": "34201cee-c137-4ffb-b5f8-1a2da53d0e91",
00:12:29.738 "method": "bdev_lvol_get_lvstores",
00:12:29.738 "req_id": 1
00:12:29.738 }
00:12:29.738 Got JSON-RPC error response
00:12:29.738 response:
00:12:29.738 {
00:12:29.738 "code": -19,
00:12:29.738 "message": "No such device"
00:12:29.738 }
00:12:29.738 04:03:44 -- common/autotest_common.sh@641 -- # es=1
00:12:29.738 04:03:44 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:12:29.738 04:03:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:12:29.738 04:03:44 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:12:29.738 04:03:44 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:12:29.738 aio_bdev
00:12:29.738 04:03:44 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 1f552a89-40cb-4181-9459-81f973d990e0
00:12:29.738 04:03:44 -- common/autotest_common.sh@885 -- # local bdev_name=1f552a89-40cb-4181-9459-81f973d990e0
00:12:29.738 04:03:44 -- common/autotest_common.sh@886 -- # local bdev_timeout=
00:12:29.738 04:03:44 -- common/autotest_common.sh@887 -- # local i
00:12:29.738 04:03:44 -- common/autotest_common.sh@888 -- # [[ -z '' ]]
00:12:29.738 04:03:44 -- common/autotest_common.sh@888 -- # bdev_timeout=2000
00:12:29.738 04:03:44 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:12:29.997 04:03:44 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1f552a89-40cb-4181-9459-81f973d990e0 -t 2000
00:12:30.257 [
00:12:30.257 {
00:12:30.257 "name": "1f552a89-40cb-4181-9459-81f973d990e0",
00:12:30.257 "aliases": [
00:12:30.257 "lvs/lvol"
00:12:30.257 ],
00:12:30.257 "product_name": "Logical Volume",
00:12:30.257 "block_size": 4096,
00:12:30.257 "num_blocks": 38912,
00:12:30.257 "uuid": "1f552a89-40cb-4181-9459-81f973d990e0",
00:12:30.257 "assigned_rate_limits": {
00:12:30.257 "rw_ios_per_sec": 0,
00:12:30.257 "rw_mbytes_per_sec": 0,
00:12:30.257 "r_mbytes_per_sec": 0,
00:12:30.257 "w_mbytes_per_sec": 0
00:12:30.257 },
00:12:30.257 "claimed": false,
00:12:30.257 "zoned": false,
00:12:30.257 "supported_io_types": {
00:12:30.257 "read": true,
00:12:30.257 "write": true,
00:12:30.257 "unmap": true,
00:12:30.257 "write_zeroes": true,
00:12:30.257 "flush": false,
00:12:30.257 "reset": true,
00:12:30.257 "compare": false,
00:12:30.257 "compare_and_write": false,
00:12:30.257 "abort": false,
00:12:30.257 "nvme_admin": false,
00:12:30.257 "nvme_io": false
00:12:30.257 },
00:12:30.257 "driver_specific": {
00:12:30.257 "lvol": {
00:12:30.257 "lvol_store_uuid": "34201cee-c137-4ffb-b5f8-1a2da53d0e91",
00:12:30.257 "base_bdev": "aio_bdev",
00:12:30.257 "thin_provision": false,
00:12:30.257 "snapshot": false,
00:12:30.257 "clone": false,
00:12:30.257 "esnap_clone": false
00:12:30.257 }
00:12:30.257 }
00:12:30.257 }
00:12:30.257 ]
00:12:30.257 04:03:44 -- common/autotest_common.sh@893 -- # return 0
00:12:30.257 04:03:44 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34201cee-c137-4ffb-b5f8-1a2da53d0e91
00:12:30.257 04:03:44 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters'
00:12:30.257 04:03:44 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 ))
00:12:30.257 04:03:44 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34201cee-c137-4ffb-b5f8-1a2da53d0e91
00:12:30.257 04:03:44 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters'
00:12:30.516 04:03:44 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 ))
00:12:30.516 04:03:44 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1f552a89-40cb-4181-9459-81f973d990e0
00:12:30.516 04:03:45 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 34201cee-c137-4ffb-b5f8-1a2da53d0e91
00:12:30.776 04:03:45 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:12:31.035 04:03:45 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:12:31.035
00:12:31.035 real 0m14.918s
00:12:31.035 user 0m15.000s
00:12:31.035 sys 0m0.839s
00:12:31.035 04:03:45 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:12:31.035 04:03:45 -- common/autotest_common.sh@10 -- # set +x
00:12:31.035 ************************************
00:12:31.035 END TEST lvs_grow_clean
00:12:31.035 ************************************
00:12:31.035 04:03:45 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty
00:12:31.035 04:03:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:12:31.035 04:03:45 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:31.035 04:03:45 -- common/autotest_common.sh@10 -- # set +x
00:12:31.035 ************************************
00:12:31.035 START TEST lvs_grow_dirty
00:12:31.035 ************************************
00:12:31.035 04:03:45 -- common/autotest_common.sh@1111 -- # lvs_grow dirty
00:12:31.035 04:03:45 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:12:31.035 04:03:45 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:12:31.035 04:03:45 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:12:31.035 04:03:45 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:12:31.035 04:03:45 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:12:31.035 04:03:45 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:12:31.035 04:03:45 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:12:31.035 04:03:45 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:12:31.035 04:03:45 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:12:31.293 04:03:45 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:12:31.293 04:03:45 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:12:31.552 04:03:45 -- target/nvmf_lvs_grow.sh@28 -- # lvs=bdf0aab8-fb36-4c94-8703-c02cc88a9f24
00:12:31.552 04:03:45 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bdf0aab8-fb36-4c94-8703-c02cc88a9f24
00:12:31.552 04:03:45 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:12:31.552 04:03:46 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:12:31.552 04:03:46 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:12:31.552 04:03:46 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bdf0aab8-fb36-4c94-8703-c02cc88a9f24 lvol 150
00:12:31.811 04:03:46 -- target/nvmf_lvs_grow.sh@33 -- # lvol=b2e40ff2-1eb4-4c88-8140-e2bf3baa39f3
00:12:31.811 04:03:46 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:12:31.811 04:03:46 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:12:31.811 [2024-04-19 04:03:46.303595] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:12:31.811 [2024-04-19 04:03:46.303646] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:12:31.811 true
00:12:31.811 04:03:46 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:12:31.811 04:03:46 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bdf0aab8-fb36-4c94-8703-c02cc88a9f24
00:12:32.071 04:03:46 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:12:32.071 04:03:46 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:12:32.331 04:03:46 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b2e40ff2-1eb4-4c88-8140-e2bf3baa39f3
00:12:32.331 04:03:46 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
00:12:32.591 04:03:46 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:12:32.591 04:03:47 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=252415
00:12:32.591 04:03:47 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:12:32.591 04:03:47 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:12:32.591 04:03:47 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 252415 /var/tmp/bdevperf.sock
00:12:32.591 04:03:47 -- common/autotest_common.sh@817 -- # '[' -z 252415 ']'
00:12:32.591 04:03:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:12:32.591 04:03:47 -- common/autotest_common.sh@822 -- # local max_retries=100
00:12:32.591 04:03:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:12:32.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
04:03:47 -- common/autotest_common.sh@826 -- # xtrace_disable
00:12:32.591 04:03:47 -- common/autotest_common.sh@10 -- # set +x
00:12:32.591 [2024-04-19 04:03:47.091828] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization...
00:12:32.592 [2024-04-19 04:03:47.091871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid252415 ]
00:12:32.592 EAL: No free 2048 kB hugepages reported on node 1
00:12:32.851 [2024-04-19 04:03:47.141208] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:32.851 [2024-04-19 04:03:47.214898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:12:33.419 04:03:47 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:12:33.420 04:03:47 -- common/autotest_common.sh@850 -- # return 0
00:12:33.420 04:03:47 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:12:33.680 Nvme0n1
00:12:33.680 04:03:48 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:12:33.939 [
00:12:33.939 {
00:12:33.939 "name": "Nvme0n1",
00:12:33.939 "aliases": [
00:12:33.939 "b2e40ff2-1eb4-4c88-8140-e2bf3baa39f3"
00:12:33.939 ],
00:12:33.939 "product_name": "NVMe disk",
00:12:33.939 "block_size": 4096,
00:12:33.939 "num_blocks": 38912,
00:12:33.939 "uuid": "b2e40ff2-1eb4-4c88-8140-e2bf3baa39f3",
00:12:33.939 "assigned_rate_limits": {
00:12:33.939 "rw_ios_per_sec": 0,
00:12:33.939 "rw_mbytes_per_sec": 0,
00:12:33.939 "r_mbytes_per_sec": 0,
00:12:33.939 "w_mbytes_per_sec": 0
00:12:33.939 },
00:12:33.939 "claimed": false,
00:12:33.939 "zoned": false,
00:12:33.939 "supported_io_types": {
00:12:33.939 "read": true,
00:12:33.939 "write": true,
00:12:33.939 "unmap": true,
00:12:33.939 "write_zeroes": true,
00:12:33.939 "flush": true,
00:12:33.939 "reset": true,
00:12:33.939 "compare": true,
00:12:33.939 "compare_and_write": true,
00:12:33.939 "abort": true,
00:12:33.939 "nvme_admin": true,
00:12:33.939 "nvme_io": true
00:12:33.939 },
00:12:33.939 "memory_domains": [
00:12:33.939 {
00:12:33.939 "dma_device_id": "SPDK_RDMA_DMA_DEVICE",
00:12:33.939 "dma_device_type": 0
00:12:33.939 }
00:12:33.939 ],
00:12:33.939 "driver_specific": {
00:12:33.939 "nvme": [
00:12:33.939 {
00:12:33.939 "trid": {
00:12:33.939 "trtype": "RDMA",
00:12:33.939 "adrfam": "IPv4",
00:12:33.939 "traddr": "192.168.100.8",
00:12:33.939 "trsvcid": "4420",
00:12:33.939 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:12:33.939 },
00:12:33.939 "ctrlr_data": {
00:12:33.939 "cntlid": 1,
00:12:33.939 "vendor_id": "0x8086",
00:12:33.939 "model_number": "SPDK bdev Controller",
00:12:33.939 "serial_number": "SPDK0",
00:12:33.939 "firmware_revision": "24.05",
00:12:33.939 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:12:33.939 "oacs": {
00:12:33.939 "security": 0,
00:12:33.939 "format": 0,
00:12:33.939 "firmware": 0,
00:12:33.939 "ns_manage": 0
00:12:33.939 },
00:12:33.939 "multi_ctrlr": true,
00:12:33.939 "ana_reporting": false
00:12:33.939 },
00:12:33.939 "vs": {
00:12:33.939 "nvme_version": "1.3"
00:12:33.939 },
00:12:33.939 "ns_data": {
00:12:33.939 "id": 1,
00:12:33.939 "can_share": true
00:12:33.939 }
00:12:33.939 }
00:12:33.939 ],
00:12:33.939 "mp_policy": "active_passive"
00:12:33.939 }
00:12:33.939 }
00:12:33.939 ]
00:12:33.939 04:03:48 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=252675
00:12:33.939 04:03:48 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:12:33.939 04:03:48 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:12:33.939 Running I/O for 10 seconds...
00:12:34.876 Latency(us)
00:12:34.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:34.876 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:34.876 Nvme0n1 : 1.00 38879.00 151.87 0.00 0.00 0.00 0.00 0.00
00:12:34.876 ===================================================================================================================
00:12:34.876 Total : 38879.00 151.87 0.00 0.00 0.00 0.00 0.00
00:12:34.876
00:12:35.813 04:03:50 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bdf0aab8-fb36-4c94-8703-c02cc88a9f24
00:12:36.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:36.071 Nvme0n1 : 2.00 39043.00 152.51 0.00 0.00 0.00 0.00 0.00
00:12:36.071 ===================================================================================================================
00:12:36.071 Total : 39043.00 152.51 0.00 0.00 0.00 0.00 0.00
00:12:36.071
00:12:36.071 true
00:12:36.071 04:03:50 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bdf0aab8-fb36-4c94-8703-c02cc88a9f24
00:12:36.071 04:03:50 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:12:36.330 04:03:50 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:12:36.330 04:03:50 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:12:36.331 04:03:50 -- target/nvmf_lvs_grow.sh@65 -- # wait 252675
00:12:36.902 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:36.902 Nvme0n1 : 3.00 39200.00 153.12 0.00 0.00 0.00 0.00 0.00
00:12:36.902 ===================================================================================================================
00:12:36.902 Total : 39200.00 153.12 0.00 0.00 0.00 0.00 0.00
00:12:36.902
00:12:37.842 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:37.842 Nvme0n1 : 4.00 39376.00 153.81 0.00 0.00 0.00 0.00 0.00
00:12:37.842 ===================================================================================================================
00:12:37.842 Total : 39376.00 153.81 0.00 0.00 0.00 0.00 0.00
00:12:37.842
00:12:39.226 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:39.226 Nvme0n1 : 5.00 39494.80 154.28 0.00 0.00 0.00 0.00 0.00
00:12:39.226 ===================================================================================================================
00:12:39.226 Total : 39494.80 154.28 0.00 0.00 0.00 0.00 0.00
00:12:39.226
00:12:40.174 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:40.174 Nvme0n1 : 6.00 39573.17 154.58 0.00 0.00 0.00 0.00 0.00
00:12:40.174 ===================================================================================================================
00:12:40.174 Total : 39573.17 154.58 0.00 0.00 0.00 0.00 0.00
00:12:40.174
00:12:41.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:41.112 Nvme0n1 : 7.00 39629.29 154.80 0.00 0.00 0.00 0.00 0.00
00:12:41.112 ===================================================================================================================
00:12:41.112 Total : 39629.29 154.80 0.00 0.00 0.00 0.00 0.00
00:12:41.112
00:12:42.052 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:42.052 Nvme0n1 : 8.00 39680.50 155.00 0.00 0.00 0.00 0.00 0.00
00:12:42.052 ===================================================================================================================
00:12:42.052 Total : 39680.50 155.00 0.00 0.00 0.00 0.00 0.00
00:12:42.052
00:12:42.996 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:42.996 Nvme0n1 : 9.00 39715.33 155.14 0.00 0.00 0.00 0.00 0.00
00:12:42.996 ===================================================================================================================
00:12:42.996 Total : 39715.33 155.14 0.00 0.00 0.00 0.00 0.00
00:12:42.996
00:12:43.937 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:43.937 Nvme0n1 : 10.00 39740.50 155.24 0.00 0.00 0.00 0.00 0.00
00:12:43.937 ===================================================================================================================
00:12:43.937 Total : 39740.50 155.24 0.00 0.00 0.00 0.00 0.00
00:12:43.937
00:12:43.937
00:12:43.937 Latency(us)
00:12:43.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:43.937 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:43.937 Nvme0n1 : 10.00 39741.33 155.24 0.00 0.00 3218.19 2160.26 13981.01
00:12:43.937 ===================================================================================================================
00:12:43.937 Total : 39741.33 155.24 0.00 0.00 3218.19 2160.26 13981.01
00:12:43.937 0
00:12:43.937 04:03:58 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 252415
00:12:43.937 04:03:58 -- common/autotest_common.sh@936 -- # '[' -z 252415 ']'
00:12:43.937 04:03:58 -- common/autotest_common.sh@940 -- # kill -0 252415
00:12:43.937 04:03:58 -- common/autotest_common.sh@941 -- # uname
00:12:43.937 04:03:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:12:43.937 04:03:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 252415
00:12:43.937 04:03:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:12:43.937 04:03:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:12:43.937 04:03:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 252415'
killing process with pid 252415
04:03:58 -- common/autotest_common.sh@955 -- # kill 252415
00:12:43.937 Received shutdown signal, test time was about 10.000000 seconds
00:12:43.937
00:12:43.937 Latency(us)
00:12:43.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:43.937 ===================================================================================================================
00:12:43.937 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:12:43.937 04:03:58 -- common/autotest_common.sh@960 -- # wait 252415
00:12:44.198 04:03:58 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:12:44.458 04:03:58 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bdf0aab8-fb36-4c94-8703-c02cc88a9f24
00:12:44.458 04:03:58 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters'
00:12:44.718 04:03:58 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61
00:12:44.718 04:03:58 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]]
00:12:44.718 04:03:58 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 249137
00:12:44.718 04:03:58 -- target/nvmf_lvs_grow.sh@74 -- # wait 249137
00:12:44.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 249137 Killed "${NVMF_APP[@]}" "$@"
00:12:44.718 04:03:59 -- target/nvmf_lvs_grow.sh@74 -- # true
00:12:44.718 04:03:59 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1
00:12:44.718 04:03:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:12:44.718 04:03:59 -- common/autotest_common.sh@710 -- # xtrace_disable
00:12:44.718 04:03:59 -- common/autotest_common.sh@10 -- # set +x
00:12:44.718 04:03:59 -- nvmf/common.sh@470 -- # nvmfpid=254531
00:12:44.718 04:03:59 -- nvmf/common.sh@471 -- # waitforlisten 254531
00:12:44.718 04:03:59 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:12:44.718 04:03:59 -- common/autotest_common.sh@817 -- # '[' -z 254531 ']'
00:12:44.718 04:03:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:44.718 04:03:59 -- common/autotest_common.sh@822 -- # local max_retries=100
00:12:44.718 04:03:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
04:03:59 -- common/autotest_common.sh@826 -- # xtrace_disable
00:12:44.718 04:03:59 -- common/autotest_common.sh@10 -- # set +x
00:12:44.718 [2024-04-19 04:03:59.076903] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization...
00:12:44.718 [2024-04-19 04:03:59.076945] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:44.718 EAL: No free 2048 kB hugepages reported on node 1
00:12:44.718 [2024-04-19 04:03:59.127464] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:44.718 [2024-04-19 04:03:59.198532] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:44.718 [2024-04-19 04:03:59.198566] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:44.718 [2024-04-19 04:03:59.198573] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:44.718 [2024-04-19 04:03:59.198578] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:44.718 [2024-04-19 04:03:59.198583] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:44.718 [2024-04-19 04:03:59.198602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:45.659 04:03:59 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:12:45.659 04:03:59 -- common/autotest_common.sh@850 -- # return 0
00:12:45.659 04:03:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:12:45.659 04:03:59 -- common/autotest_common.sh@716 -- # xtrace_disable
00:12:45.659 04:03:59 -- common/autotest_common.sh@10 -- # set +x
00:12:45.659 04:03:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:45.659 04:03:59 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:12:45.659 [2024-04-19 04:04:00.022191] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore
00:12:45.659 [2024-04-19 04:04:00.022268] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:12:45.659 [2024-04-19 04:04:00.022291] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:12:45.659 04:04:00 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev
00:12:45.659 04:04:00 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev b2e40ff2-1eb4-4c88-8140-e2bf3baa39f3
00:12:45.659 04:04:00 -- common/autotest_common.sh@885 -- # local bdev_name=b2e40ff2-1eb4-4c88-8140-e2bf3baa39f3
00:12:45.659 04:04:00 -- common/autotest_common.sh@886 -- # local bdev_timeout=
00:12:45.659 04:04:00 -- common/autotest_common.sh@887 -- # local i
00:12:45.659 04:04:00 -- common/autotest_common.sh@888 -- # [[ -z '' ]]
00:12:45.659 04:04:00 -- common/autotest_common.sh@888 -- # bdev_timeout=2000
00:12:45.659 04:04:00 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:12:45.920 04:04:00 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b2e40ff2-1eb4-4c88-8140-e2bf3baa39f3 -t 2000
00:12:45.920 [
00:12:45.920 {
00:12:45.920 "name": "b2e40ff2-1eb4-4c88-8140-e2bf3baa39f3",
00:12:45.920 "aliases": [
00:12:45.920 "lvs/lvol"
00:12:45.920 ],
00:12:45.920 "product_name": "Logical Volume",
00:12:45.920 "block_size": 4096,
00:12:45.920 "num_blocks": 38912,
00:12:45.920 "uuid": "b2e40ff2-1eb4-4c88-8140-e2bf3baa39f3",
00:12:45.920 "assigned_rate_limits": {
00:12:45.920 "rw_ios_per_sec": 0,
00:12:45.920 "rw_mbytes_per_sec": 0,
00:12:45.920 "r_mbytes_per_sec": 0,
00:12:45.920 "w_mbytes_per_sec": 0
00:12:45.920 },
00:12:45.920 "claimed": false,
00:12:45.920 "zoned": false,
00:12:45.920 "supported_io_types": {
00:12:45.920 "read": true,
00:12:45.920 "write": true,
00:12:45.920 "unmap": true,
00:12:45.920 "write_zeroes": true,
00:12:45.920 "flush": false,
00:12:45.920 "reset": true,
00:12:45.920 "compare": false,
00:12:45.920 "compare_and_write": false,
00:12:45.920 "abort": false,
00:12:45.920 "nvme_admin": false,
00:12:45.920 "nvme_io": false
00:12:45.920 },
00:12:45.920 "driver_specific": {
00:12:45.920 "lvol": {
00:12:45.920 "lvol_store_uuid": "bdf0aab8-fb36-4c94-8703-c02cc88a9f24",
00:12:45.920 "base_bdev": "aio_bdev",
00:12:45.920 "thin_provision": false,
00:12:45.920 "snapshot": false,
00:12:45.920 "clone": false,
00:12:45.920 "esnap_clone": false
00:12:45.920 }
00:12:45.920 }
00:12:45.920 }
00:12:45.921 ]
00:12:45.921 04:04:00 -- common/autotest_common.sh@893 -- # return 0
00:12:45.921 04:04:00 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters'
00:12:45.921 04:04:00 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bdf0aab8-fb36-4c94-8703-c02cc88a9f24
00:12:46.182 04:04:00 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 ))
00:12:46.182 04:04:00 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bdf0aab8-fb36-4c94-8703-c02cc88a9f24
00:12:46.182 04:04:00 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters'
00:12:46.182 04:04:00 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 ))
00:12:46.182 04:04:00 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:12:46.443 [2024-04-19 04:04:00.818588] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:12:46.443 04:04:00 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bdf0aab8-fb36-4c94-8703-c02cc88a9f24
00:12:46.443 04:04:00 -- common/autotest_common.sh@638 -- # local es=0
00:12:46.443 04:04:00 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bdf0aab8-fb36-4c94-8703-c02cc88a9f24
00:12:46.443 04:04:00 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:12:46.443 04:04:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:12:46.443 04:04:00 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:12:46.443 04:04:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:12:46.443 04:04:00 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:12:46.443 04:04:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:12:46.443 04:04:00 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:12:46.443 04:04:00 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]]
00:12:46.443 04:04:00 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bdf0aab8-fb36-4c94-8703-c02cc88a9f24
00:12:46.704 request:
00:12:46.704 {
00:12:46.704 "uuid": "bdf0aab8-fb36-4c94-8703-c02cc88a9f24",
00:12:46.704 "method": "bdev_lvol_get_lvstores",
00:12:46.704 "req_id": 1
00:12:46.704 }
00:12:46.704 Got JSON-RPC error response
response:
00:12:46.704 {
00:12:46.704 "code": -19,
00:12:46.704 "message": "No such device"
00:12:46.704 }
00:12:46.704 04:04:01 -- common/autotest_common.sh@641 -- # es=1
00:12:46.704 04:04:01 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:12:46.704 04:04:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:12:46.704 04:04:01 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:12:46.704 04:04:01 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:12:46.704 aio_bdev
00:12:46.704 04:04:01 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev b2e40ff2-1eb4-4c88-8140-e2bf3baa39f3
00:12:46.704 04:04:01 -- common/autotest_common.sh@885 -- # local bdev_name=b2e40ff2-1eb4-4c88-8140-e2bf3baa39f3
00:12:46.704 04:04:01 -- common/autotest_common.sh@886 -- # local bdev_timeout=
00:12:46.704 04:04:01 -- common/autotest_common.sh@887 -- # local i
00:12:46.705 04:04:01 -- common/autotest_common.sh@888 -- # [[ -z '' ]]
00:12:46.705 04:04:01 -- common/autotest_common.sh@888 -- # bdev_timeout=2000
00:12:46.705 04:04:01 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:12:46.966 04:04:01 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b2e40ff2-1eb4-4c88-8140-e2bf3baa39f3 -t 2000
00:12:46.966 [
00:12:46.966 {
00:12:46.966 "name": "b2e40ff2-1eb4-4c88-8140-e2bf3baa39f3",
00:12:46.966 "aliases": [
00:12:46.966 "lvs/lvol" 00:12:46.966 ], 00:12:46.966 "product_name": "Logical Volume", 00:12:46.966 "block_size": 4096, 00:12:46.966 "num_blocks": 38912, 00:12:46.966 "uuid": "b2e40ff2-1eb4-4c88-8140-e2bf3baa39f3", 00:12:46.966 "assigned_rate_limits": { 00:12:46.966 "rw_ios_per_sec": 0, 00:12:46.966 "rw_mbytes_per_sec": 0, 00:12:46.966 "r_mbytes_per_sec": 0, 00:12:46.966 "w_mbytes_per_sec": 0 00:12:46.966 }, 00:12:46.966 "claimed": false, 00:12:46.966 "zoned": false, 00:12:46.966 "supported_io_types": { 00:12:46.966 "read": true, 00:12:46.966 "write": true, 00:12:46.966 "unmap": true, 00:12:46.966 "write_zeroes": true, 00:12:46.966 "flush": false, 00:12:46.966 "reset": true, 00:12:46.966 "compare": false, 00:12:46.966 "compare_and_write": false, 00:12:46.966 "abort": false, 00:12:46.966 "nvme_admin": false, 00:12:46.966 "nvme_io": false 00:12:46.966 }, 00:12:46.966 "driver_specific": { 00:12:46.966 "lvol": { 00:12:46.966 "lvol_store_uuid": "bdf0aab8-fb36-4c94-8703-c02cc88a9f24", 00:12:46.966 "base_bdev": "aio_bdev", 00:12:46.966 "thin_provision": false, 00:12:46.966 "snapshot": false, 00:12:46.966 "clone": false, 00:12:46.966 "esnap_clone": false 00:12:46.966 } 00:12:46.966 } 00:12:46.966 } 00:12:46.966 ] 00:12:46.966 04:04:01 -- common/autotest_common.sh@893 -- # return 0 00:12:46.966 04:04:01 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bdf0aab8-fb36-4c94-8703-c02cc88a9f24 00:12:46.966 04:04:01 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:12:47.227 04:04:01 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:12:47.227 04:04:01 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bdf0aab8-fb36-4c94-8703-c02cc88a9f24 00:12:47.227 04:04:01 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:12:47.486 04:04:01 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 
00:12:47.486 04:04:01 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b2e40ff2-1eb4-4c88-8140-e2bf3baa39f3 00:12:47.486 04:04:01 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bdf0aab8-fb36-4c94-8703-c02cc88a9f24 00:12:47.747 04:04:02 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:47.747 04:04:02 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:48.008 00:12:48.008 real 0m16.757s 00:12:48.008 user 0m44.111s 00:12:48.008 sys 0m2.679s 00:12:48.008 04:04:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:48.008 04:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:48.008 ************************************ 00:12:48.009 END TEST lvs_grow_dirty 00:12:48.009 ************************************ 00:12:48.009 04:04:02 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:48.009 04:04:02 -- common/autotest_common.sh@794 -- # type=--id 00:12:48.009 04:04:02 -- common/autotest_common.sh@795 -- # id=0 00:12:48.009 04:04:02 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:12:48.009 04:04:02 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:48.009 04:04:02 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:12:48.009 04:04:02 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:12:48.009 04:04:02 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:12:48.009 04:04:02 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:48.009 nvmf_trace.0 00:12:48.009 04:04:02 -- common/autotest_common.sh@809 -- # return 0 00:12:48.009 04:04:02 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:48.009 
04:04:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:48.009 04:04:02 -- nvmf/common.sh@117 -- # sync 00:12:48.009 04:04:02 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:48.009 04:04:02 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:48.009 04:04:02 -- nvmf/common.sh@120 -- # set +e 00:12:48.009 04:04:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:48.009 04:04:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:48.009 rmmod nvme_rdma 00:12:48.009 rmmod nvme_fabrics 00:12:48.009 04:04:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:48.009 04:04:02 -- nvmf/common.sh@124 -- # set -e 00:12:48.009 04:04:02 -- nvmf/common.sh@125 -- # return 0 00:12:48.009 04:04:02 -- nvmf/common.sh@478 -- # '[' -n 254531 ']' 00:12:48.009 04:04:02 -- nvmf/common.sh@479 -- # killprocess 254531 00:12:48.009 04:04:02 -- common/autotest_common.sh@936 -- # '[' -z 254531 ']' 00:12:48.009 04:04:02 -- common/autotest_common.sh@940 -- # kill -0 254531 00:12:48.009 04:04:02 -- common/autotest_common.sh@941 -- # uname 00:12:48.009 04:04:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:48.009 04:04:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 254531 00:12:48.009 04:04:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:48.009 04:04:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:48.009 04:04:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 254531' 00:12:48.009 killing process with pid 254531 00:12:48.009 04:04:02 -- common/autotest_common.sh@955 -- # kill 254531 00:12:48.009 04:04:02 -- common/autotest_common.sh@960 -- # wait 254531 00:12:48.269 04:04:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:48.269 04:04:02 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:12:48.269 00:12:48.269 real 0m39.028s 00:12:48.269 user 1m4.789s 00:12:48.269 sys 0m8.072s 00:12:48.269 04:04:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:48.269 
04:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:48.269 ************************************ 00:12:48.269 END TEST nvmf_lvs_grow 00:12:48.269 ************************************ 00:12:48.269 04:04:02 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:12:48.269 04:04:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:48.269 04:04:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:48.269 04:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:48.531 ************************************ 00:12:48.531 START TEST nvmf_bdev_io_wait 00:12:48.531 ************************************ 00:12:48.531 04:04:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:12:48.531 * Looking for test storage... 00:12:48.531 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:48.531 04:04:02 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:48.531 04:04:02 -- nvmf/common.sh@7 -- # uname -s 00:12:48.531 04:04:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.531 04:04:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.531 04:04:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.531 04:04:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.531 04:04:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.531 04:04:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.531 04:04:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.531 04:04:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.531 04:04:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.531 04:04:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.531 04:04:02 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:48.531 04:04:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:12:48.531 04:04:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.531 04:04:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.531 04:04:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:48.531 04:04:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:48.531 04:04:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:48.531 04:04:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.531 04:04:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.531 04:04:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.531 04:04:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.531 04:04:02 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.531 04:04:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.531 04:04:02 -- paths/export.sh@5 -- # export PATH 00:12:48.531 04:04:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.531 04:04:02 -- nvmf/common.sh@47 -- # : 0 00:12:48.531 04:04:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:48.531 04:04:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:48.531 04:04:02 -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:12:48.531 04:04:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.531 04:04:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.531 04:04:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:48.531 04:04:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:48.531 04:04:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:48.531 04:04:02 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:48.531 04:04:02 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:48.531 04:04:02 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:48.531 04:04:02 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:12:48.531 04:04:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.531 04:04:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:48.531 04:04:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:48.531 04:04:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:48.531 04:04:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.531 04:04:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.531 04:04:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.531 04:04:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:48.531 04:04:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:48.531 04:04:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:48.531 04:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:53.920 04:04:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:53.920 04:04:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:53.920 04:04:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:53.920 04:04:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:53.920 04:04:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:53.921 04:04:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:53.921 04:04:08 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:53.921 04:04:08 -- 
nvmf/common.sh@295 -- # net_devs=() 00:12:53.921 04:04:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:53.921 04:04:08 -- nvmf/common.sh@296 -- # e810=() 00:12:53.921 04:04:08 -- nvmf/common.sh@296 -- # local -ga e810 00:12:53.921 04:04:08 -- nvmf/common.sh@297 -- # x722=() 00:12:53.921 04:04:08 -- nvmf/common.sh@297 -- # local -ga x722 00:12:53.921 04:04:08 -- nvmf/common.sh@298 -- # mlx=() 00:12:53.921 04:04:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:53.921 04:04:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:53.921 04:04:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:53.921 04:04:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:53.921 04:04:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:53.921 04:04:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:53.921 04:04:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:53.921 04:04:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:53.921 04:04:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:53.921 04:04:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:53.921 04:04:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:53.921 04:04:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:53.921 04:04:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:53.921 04:04:08 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:53.921 04:04:08 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:53.921 04:04:08 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:53.921 04:04:08 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:53.921 04:04:08 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:53.921 04:04:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:53.921 04:04:08 -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:53.921 04:04:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:53.921 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:53.921 04:04:08 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:53.921 04:04:08 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:53.921 04:04:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:53.921 04:04:08 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:53.921 04:04:08 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:53.921 04:04:08 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:53.921 04:04:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:53.921 04:04:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:53.921 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:53.921 04:04:08 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:53.921 04:04:08 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:53.921 04:04:08 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:53.921 04:04:08 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:53.921 04:04:08 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:53.921 04:04:08 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:53.921 04:04:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:53.921 04:04:08 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:53.921 04:04:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:53.921 04:04:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:53.921 04:04:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:53.921 04:04:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:53.921 04:04:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:53.921 Found net devices under 0000:18:00.0: mlx_0_0 00:12:53.921 04:04:08 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:53.921 04:04:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:53.921 04:04:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:53.921 04:04:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:53.921 04:04:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:53.921 04:04:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:53.921 Found net devices under 0000:18:00.1: mlx_0_1 00:12:53.921 04:04:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:53.921 04:04:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:53.921 04:04:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:53.921 04:04:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:53.921 04:04:08 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:12:53.921 04:04:08 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:12:53.921 04:04:08 -- nvmf/common.sh@409 -- # rdma_device_init 00:12:53.921 04:04:08 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:12:53.921 04:04:08 -- nvmf/common.sh@58 -- # uname 00:12:53.921 04:04:08 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:53.921 04:04:08 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:53.921 04:04:08 -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:53.921 04:04:08 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:53.921 04:04:08 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:53.921 04:04:08 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:53.921 04:04:08 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:53.921 04:04:08 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:53.921 04:04:08 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:12:53.921 04:04:08 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:53.921 04:04:08 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:53.921 04:04:08 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:53.921 04:04:08 -- nvmf/common.sh@94 -- # mapfile -t 
rxe_net_devs 00:12:53.921 04:04:08 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:53.921 04:04:08 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:53.921 04:04:08 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:53.921 04:04:08 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:53.921 04:04:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:53.921 04:04:08 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:53.921 04:04:08 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:53.921 04:04:08 -- nvmf/common.sh@105 -- # continue 2 00:12:53.921 04:04:08 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:53.921 04:04:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:53.921 04:04:08 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:53.921 04:04:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:53.921 04:04:08 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:53.921 04:04:08 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:53.921 04:04:08 -- nvmf/common.sh@105 -- # continue 2 00:12:53.921 04:04:08 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:53.921 04:04:08 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:53.921 04:04:08 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:53.921 04:04:08 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:53.921 04:04:08 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:53.921 04:04:08 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:53.921 04:04:08 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:53.921 04:04:08 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:53.921 04:04:08 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:53.921 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:53.921 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:12:53.921 altname enp24s0f0np0 00:12:53.921 
altname ens785f0np0 00:12:53.921 inet 192.168.100.8/24 scope global mlx_0_0 00:12:53.921 valid_lft forever preferred_lft forever 00:12:53.921 04:04:08 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:53.921 04:04:08 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:53.921 04:04:08 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:53.921 04:04:08 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:53.921 04:04:08 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:53.921 04:04:08 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:53.921 04:04:08 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:53.921 04:04:08 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:53.921 04:04:08 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:53.921 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:53.921 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:12:53.921 altname enp24s0f1np1 00:12:53.921 altname ens785f1np1 00:12:53.921 inet 192.168.100.9/24 scope global mlx_0_1 00:12:53.921 valid_lft forever preferred_lft forever 00:12:53.921 04:04:08 -- nvmf/common.sh@411 -- # return 0 00:12:53.921 04:04:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:53.921 04:04:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:53.921 04:04:08 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:12:53.921 04:04:08 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:12:53.921 04:04:08 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:53.921 04:04:08 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:53.921 04:04:08 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:53.921 04:04:08 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:53.921 04:04:08 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:53.921 04:04:08 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:53.921 04:04:08 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:53.921 
04:04:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:53.921 04:04:08 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:53.921 04:04:08 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:53.921 04:04:08 -- nvmf/common.sh@105 -- # continue 2 00:12:53.921 04:04:08 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:53.921 04:04:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:53.921 04:04:08 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:53.921 04:04:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:53.921 04:04:08 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:53.921 04:04:08 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:53.921 04:04:08 -- nvmf/common.sh@105 -- # continue 2 00:12:53.921 04:04:08 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:53.921 04:04:08 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:53.921 04:04:08 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:53.921 04:04:08 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:53.921 04:04:08 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:53.921 04:04:08 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:53.921 04:04:08 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:53.921 04:04:08 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:53.921 04:04:08 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:53.921 04:04:08 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:53.922 04:04:08 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:53.922 04:04:08 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:53.922 04:04:08 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:12:53.922 192.168.100.9' 00:12:53.922 04:04:08 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:12:53.922 192.168.100.9' 00:12:53.922 04:04:08 -- nvmf/common.sh@446 -- # head -n 1 00:12:53.922 04:04:08 -- nvmf/common.sh@446 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:53.922 04:04:08 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:12:53.922 192.168.100.9' 00:12:53.922 04:04:08 -- nvmf/common.sh@447 -- # tail -n +2 00:12:53.922 04:04:08 -- nvmf/common.sh@447 -- # head -n 1 00:12:53.922 04:04:08 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:53.922 04:04:08 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:12:53.922 04:04:08 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:53.922 04:04:08 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:12:53.922 04:04:08 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:12:53.922 04:04:08 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:12:53.922 04:04:08 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:53.922 04:04:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:53.922 04:04:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:53.922 04:04:08 -- common/autotest_common.sh@10 -- # set +x 00:12:53.922 04:04:08 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:53.922 04:04:08 -- nvmf/common.sh@470 -- # nvmfpid=258481 00:12:53.922 04:04:08 -- nvmf/common.sh@471 -- # waitforlisten 258481 00:12:53.922 04:04:08 -- common/autotest_common.sh@817 -- # '[' -z 258481 ']' 00:12:53.922 04:04:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.922 04:04:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:53.922 04:04:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:53.922 04:04:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:53.922 04:04:08 -- common/autotest_common.sh@10 -- # set +x 00:12:53.922 [2024-04-19 04:04:08.245834] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:12:53.922 [2024-04-19 04:04:08.245872] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.922 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.922 [2024-04-19 04:04:08.294518] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:53.922 [2024-04-19 04:04:08.367829] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:53.922 [2024-04-19 04:04:08.367866] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:53.922 [2024-04-19 04:04:08.367873] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:53.922 [2024-04-19 04:04:08.367878] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:53.922 [2024-04-19 04:04:08.367883] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:53.922 [2024-04-19 04:04:08.367920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.922 [2024-04-19 04:04:08.368014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.922 [2024-04-19 04:04:08.368096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.922 [2024-04-19 04:04:08.368097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.863 04:04:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:54.863 04:04:09 -- common/autotest_common.sh@850 -- # return 0 00:12:54.863 04:04:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:54.863 04:04:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:54.863 04:04:09 -- common/autotest_common.sh@10 -- # set +x 00:12:54.863 04:04:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.863 04:04:09 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:54.863 04:04:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.863 04:04:09 -- common/autotest_common.sh@10 -- # set +x 00:12:54.863 04:04:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.863 04:04:09 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:54.863 04:04:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.863 04:04:09 -- common/autotest_common.sh@10 -- # set +x 00:12:54.863 04:04:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.863 04:04:09 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:54.863 04:04:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.863 04:04:09 -- common/autotest_common.sh@10 -- # set +x 00:12:54.863 [2024-04-19 04:04:09.156183] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1982740/0x1986c30) succeed. 
00:12:54.863 [2024-04-19 04:04:09.165371] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1983d30/0x19c82c0) succeed. 00:12:54.863 04:04:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.863 04:04:09 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:54.863 04:04:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.863 04:04:09 -- common/autotest_common.sh@10 -- # set +x 00:12:54.863 Malloc0 00:12:54.863 04:04:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.863 04:04:09 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:54.863 04:04:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.863 04:04:09 -- common/autotest_common.sh@10 -- # set +x 00:12:54.863 04:04:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.863 04:04:09 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:54.863 04:04:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.863 04:04:09 -- common/autotest_common.sh@10 -- # set +x 00:12:54.863 04:04:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.863 04:04:09 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:54.863 04:04:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:54.863 04:04:09 -- common/autotest_common.sh@10 -- # set +x 00:12:54.863 [2024-04-19 04:04:09.333033] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:54.863 04:04:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:54.864 04:04:09 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=258655 00:12:54.864 04:04:09 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 
1 -s 256 00:12:54.864 04:04:09 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:54.864 04:04:09 -- target/bdev_io_wait.sh@30 -- # READ_PID=258657 00:12:54.864 04:04:09 -- nvmf/common.sh@521 -- # config=() 00:12:54.864 04:04:09 -- nvmf/common.sh@521 -- # local subsystem config 00:12:54.864 04:04:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:54.864 04:04:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:54.864 { 00:12:54.864 "params": { 00:12:54.864 "name": "Nvme$subsystem", 00:12:54.864 "trtype": "$TEST_TRANSPORT", 00:12:54.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:54.864 "adrfam": "ipv4", 00:12:54.864 "trsvcid": "$NVMF_PORT", 00:12:54.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:54.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:54.864 "hdgst": ${hdgst:-false}, 00:12:54.864 "ddgst": ${ddgst:-false} 00:12:54.864 }, 00:12:54.864 "method": "bdev_nvme_attach_controller" 00:12:54.864 } 00:12:54.864 EOF 00:12:54.864 )") 00:12:54.864 04:04:09 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:54.864 04:04:09 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=258659 00:12:54.864 04:04:09 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:54.864 04:04:09 -- nvmf/common.sh@521 -- # config=() 00:12:54.864 04:04:09 -- nvmf/common.sh@521 -- # local subsystem config 00:12:54.864 04:04:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:54.864 04:04:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:54.864 { 00:12:54.864 "params": { 00:12:54.864 "name": "Nvme$subsystem", 00:12:54.864 "trtype": "$TEST_TRANSPORT", 00:12:54.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:54.864 "adrfam": "ipv4", 00:12:54.864 "trsvcid": "$NVMF_PORT", 00:12:54.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:54.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:54.864 "hdgst": 
${hdgst:-false}, 00:12:54.864 "ddgst": ${ddgst:-false} 00:12:54.864 }, 00:12:54.864 "method": "bdev_nvme_attach_controller" 00:12:54.864 } 00:12:54.864 EOF 00:12:54.864 )") 00:12:54.864 04:04:09 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:54.864 04:04:09 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=258662 00:12:54.864 04:04:09 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:54.864 04:04:09 -- target/bdev_io_wait.sh@35 -- # sync 00:12:54.864 04:04:09 -- nvmf/common.sh@543 -- # cat 00:12:54.864 04:04:09 -- nvmf/common.sh@521 -- # config=() 00:12:54.864 04:04:09 -- nvmf/common.sh@521 -- # local subsystem config 00:12:54.864 04:04:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:54.864 04:04:09 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:54.864 04:04:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:54.864 { 00:12:54.864 "params": { 00:12:54.864 "name": "Nvme$subsystem", 00:12:54.864 "trtype": "$TEST_TRANSPORT", 00:12:54.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:54.864 "adrfam": "ipv4", 00:12:54.864 "trsvcid": "$NVMF_PORT", 00:12:54.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:54.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:54.864 "hdgst": ${hdgst:-false}, 00:12:54.864 "ddgst": ${ddgst:-false} 00:12:54.864 }, 00:12:54.864 "method": "bdev_nvme_attach_controller" 00:12:54.864 } 00:12:54.864 EOF 00:12:54.864 )") 00:12:54.864 04:04:09 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:54.864 04:04:09 -- nvmf/common.sh@521 -- # config=() 00:12:54.864 04:04:09 -- nvmf/common.sh@543 -- # cat 00:12:54.864 04:04:09 -- nvmf/common.sh@521 -- # local subsystem config 00:12:54.864 04:04:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 
00:12:54.864 04:04:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:54.864 { 00:12:54.864 "params": { 00:12:54.864 "name": "Nvme$subsystem", 00:12:54.864 "trtype": "$TEST_TRANSPORT", 00:12:54.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:54.864 "adrfam": "ipv4", 00:12:54.864 "trsvcid": "$NVMF_PORT", 00:12:54.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:54.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:54.864 "hdgst": ${hdgst:-false}, 00:12:54.864 "ddgst": ${ddgst:-false} 00:12:54.864 }, 00:12:54.864 "method": "bdev_nvme_attach_controller" 00:12:54.864 } 00:12:54.864 EOF 00:12:54.864 )") 00:12:54.864 04:04:09 -- nvmf/common.sh@543 -- # cat 00:12:54.864 04:04:09 -- target/bdev_io_wait.sh@37 -- # wait 258655 00:12:54.864 04:04:09 -- nvmf/common.sh@543 -- # cat 00:12:54.864 04:04:09 -- nvmf/common.sh@545 -- # jq . 00:12:54.864 04:04:09 -- nvmf/common.sh@545 -- # jq . 00:12:54.864 04:04:09 -- nvmf/common.sh@545 -- # jq . 00:12:54.864 04:04:09 -- nvmf/common.sh@546 -- # IFS=, 00:12:54.864 04:04:09 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:54.864 "params": { 00:12:54.864 "name": "Nvme1", 00:12:54.864 "trtype": "rdma", 00:12:54.864 "traddr": "192.168.100.8", 00:12:54.864 "adrfam": "ipv4", 00:12:54.864 "trsvcid": "4420", 00:12:54.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:54.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:54.864 "hdgst": false, 00:12:54.864 "ddgst": false 00:12:54.864 }, 00:12:54.864 "method": "bdev_nvme_attach_controller" 00:12:54.864 }' 00:12:54.864 04:04:09 -- nvmf/common.sh@545 -- # jq . 
00:12:54.864 04:04:09 -- nvmf/common.sh@546 -- # IFS=, 00:12:54.864 04:04:09 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:54.864 "params": { 00:12:54.864 "name": "Nvme1", 00:12:54.864 "trtype": "rdma", 00:12:54.864 "traddr": "192.168.100.8", 00:12:54.864 "adrfam": "ipv4", 00:12:54.864 "trsvcid": "4420", 00:12:54.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:54.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:54.864 "hdgst": false, 00:12:54.864 "ddgst": false 00:12:54.864 }, 00:12:54.864 "method": "bdev_nvme_attach_controller" 00:12:54.864 }' 00:12:54.864 04:04:09 -- nvmf/common.sh@546 -- # IFS=, 00:12:54.864 04:04:09 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:54.864 "params": { 00:12:54.864 "name": "Nvme1", 00:12:54.864 "trtype": "rdma", 00:12:54.864 "traddr": "192.168.100.8", 00:12:54.864 "adrfam": "ipv4", 00:12:54.864 "trsvcid": "4420", 00:12:54.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:54.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:54.864 "hdgst": false, 00:12:54.864 "ddgst": false 00:12:54.864 }, 00:12:54.864 "method": "bdev_nvme_attach_controller" 00:12:54.864 }' 00:12:54.864 04:04:09 -- nvmf/common.sh@546 -- # IFS=, 00:12:54.864 04:04:09 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:54.864 "params": { 00:12:54.864 "name": "Nvme1", 00:12:54.864 "trtype": "rdma", 00:12:54.864 "traddr": "192.168.100.8", 00:12:54.864 "adrfam": "ipv4", 00:12:54.864 "trsvcid": "4420", 00:12:54.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:54.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:54.864 "hdgst": false, 00:12:54.864 "ddgst": false 00:12:54.864 }, 00:12:54.864 "method": "bdev_nvme_attach_controller" 00:12:54.864 }' 00:12:54.864 [2024-04-19 04:04:09.380864] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:12:54.864 [2024-04-19 04:04:09.380911] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:54.864 [2024-04-19 04:04:09.381620] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:12:54.864 [2024-04-19 04:04:09.381658] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:54.864 [2024-04-19 04:04:09.382118] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:12:54.864 [2024-04-19 04:04:09.382158] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:54.864 [2024-04-19 04:04:09.382381] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:12:54.864 [2024-04-19 04:04:09.382424] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:55.124 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.124 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.124 [2024-04-19 04:04:09.552907] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.124 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.124 [2024-04-19 04:04:09.626174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:55.124 [2024-04-19 04:04:09.634177] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.383 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.383 [2024-04-19 04:04:09.706836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:12:55.383 [2024-04-19 04:04:09.733181] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.383 [2024-04-19 04:04:09.787426] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.383 [2024-04-19 04:04:09.813120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:12:55.383 [2024-04-19 04:04:09.860478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:12:55.383 Running I/O for 1 seconds... 00:12:55.383 Running I/O for 1 seconds... 00:12:55.644 Running I/O for 1 seconds... 00:12:55.644 Running I/O for 1 seconds... 
00:12:56.592 00:12:56.593 Latency(us) 00:12:56.593 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.593 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:56.593 Nvme1n1 : 1.00 22017.38 86.01 0.00 0.00 5799.28 3956.43 15437.37 00:12:56.593 =================================================================================================================== 00:12:56.593 Total : 22017.38 86.01 0.00 0.00 5799.28 3956.43 15437.37 00:12:56.593 00:12:56.593 Latency(us) 00:12:56.593 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.593 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:56.593 Nvme1n1 : 1.01 15782.23 61.65 0.00 0.00 8084.30 5704.06 17185.00 00:12:56.593 =================================================================================================================== 00:12:56.593 Total : 15782.23 61.65 0.00 0.00 8084.30 5704.06 17185.00 00:12:56.593 00:12:56.593 Latency(us) 00:12:56.593 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.593 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:56.593 Nvme1n1 : 1.00 15930.39 62.23 0.00 0.00 8016.72 3689.43 19418.07 00:12:56.593 =================================================================================================================== 00:12:56.593 Total : 15930.39 62.23 0.00 0.00 8016.72 3689.43 19418.07 00:12:56.593 00:12:56.593 Latency(us) 00:12:56.593 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.593 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:56.593 Nvme1n1 : 1.00 284211.46 1110.20 0.00 0.00 448.35 175.22 1723.35 00:12:56.593 =================================================================================================================== 00:12:56.593 Total : 284211.46 1110.20 0.00 0.00 448.35 175.22 1723.35 00:12:56.593 04:04:11 -- target/bdev_io_wait.sh@38 
-- # wait 258657 00:12:56.853 04:04:11 -- target/bdev_io_wait.sh@39 -- # wait 258659 00:12:56.853 04:04:11 -- target/bdev_io_wait.sh@40 -- # wait 258662 00:12:56.853 04:04:11 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.853 04:04:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:56.853 04:04:11 -- common/autotest_common.sh@10 -- # set +x 00:12:56.853 04:04:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:56.853 04:04:11 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:56.853 04:04:11 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:56.853 04:04:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:56.853 04:04:11 -- nvmf/common.sh@117 -- # sync 00:12:56.853 04:04:11 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:56.853 04:04:11 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:56.853 04:04:11 -- nvmf/common.sh@120 -- # set +e 00:12:56.853 04:04:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:56.853 04:04:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:56.853 rmmod nvme_rdma 00:12:56.853 rmmod nvme_fabrics 00:12:56.853 04:04:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:56.853 04:04:11 -- nvmf/common.sh@124 -- # set -e 00:12:56.853 04:04:11 -- nvmf/common.sh@125 -- # return 0 00:12:56.853 04:04:11 -- nvmf/common.sh@478 -- # '[' -n 258481 ']' 00:12:56.853 04:04:11 -- nvmf/common.sh@479 -- # killprocess 258481 00:12:56.853 04:04:11 -- common/autotest_common.sh@936 -- # '[' -z 258481 ']' 00:12:56.853 04:04:11 -- common/autotest_common.sh@940 -- # kill -0 258481 00:12:56.853 04:04:11 -- common/autotest_common.sh@941 -- # uname 00:12:56.854 04:04:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:56.854 04:04:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 258481 00:12:56.854 04:04:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:56.854 04:04:11 -- common/autotest_common.sh@946 
-- # '[' reactor_0 = sudo ']' 00:12:56.854 04:04:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 258481' 00:12:56.854 killing process with pid 258481 00:12:56.854 04:04:11 -- common/autotest_common.sh@955 -- # kill 258481 00:12:56.854 04:04:11 -- common/autotest_common.sh@960 -- # wait 258481 00:12:57.114 04:04:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:57.114 04:04:11 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:12:57.114 00:12:57.114 real 0m8.743s 00:12:57.114 user 0m19.596s 00:12:57.114 sys 0m5.178s 00:12:57.114 04:04:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:57.114 04:04:11 -- common/autotest_common.sh@10 -- # set +x 00:12:57.114 ************************************ 00:12:57.114 END TEST nvmf_bdev_io_wait 00:12:57.114 ************************************ 00:12:57.114 04:04:11 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:12:57.114 04:04:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:57.114 04:04:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:57.114 04:04:11 -- common/autotest_common.sh@10 -- # set +x 00:12:57.375 ************************************ 00:12:57.375 START TEST nvmf_queue_depth 00:12:57.375 ************************************ 00:12:57.375 04:04:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:12:57.375 * Looking for test storage... 
00:12:57.375 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:57.375 04:04:11 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:57.375 04:04:11 -- nvmf/common.sh@7 -- # uname -s 00:12:57.375 04:04:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.375 04:04:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.375 04:04:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.375 04:04:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.375 04:04:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.375 04:04:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.375 04:04:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.375 04:04:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.375 04:04:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.375 04:04:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.375 04:04:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:57.375 04:04:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:12:57.375 04:04:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.375 04:04:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.375 04:04:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:57.375 04:04:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.375 04:04:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:57.375 04:04:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.375 04:04:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.375 04:04:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.375 04:04:11 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.375 04:04:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.375 04:04:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.375 04:04:11 -- paths/export.sh@5 -- # export PATH 00:12:57.375 04:04:11 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.375 04:04:11 -- nvmf/common.sh@47 -- # : 0 00:12:57.375 04:04:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:57.375 04:04:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:57.375 04:04:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.375 04:04:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.375 04:04:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.375 04:04:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:57.375 04:04:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:57.375 04:04:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:57.375 04:04:11 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:57.375 04:04:11 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:57.375 04:04:11 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:57.375 04:04:11 -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:57.375 04:04:11 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:12:57.375 04:04:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.375 04:04:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:57.375 04:04:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:57.375 04:04:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:57.375 04:04:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.375 04:04:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:12:57.375 04:04:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.375 04:04:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:57.375 04:04:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:57.375 04:04:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:57.375 04:04:11 -- common/autotest_common.sh@10 -- # set +x 00:13:02.663 04:04:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:02.663 04:04:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:02.663 04:04:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:02.663 04:04:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:02.663 04:04:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:02.663 04:04:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:02.663 04:04:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:02.663 04:04:16 -- nvmf/common.sh@295 -- # net_devs=() 00:13:02.663 04:04:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:02.663 04:04:16 -- nvmf/common.sh@296 -- # e810=() 00:13:02.663 04:04:16 -- nvmf/common.sh@296 -- # local -ga e810 00:13:02.663 04:04:16 -- nvmf/common.sh@297 -- # x722=() 00:13:02.663 04:04:16 -- nvmf/common.sh@297 -- # local -ga x722 00:13:02.663 04:04:16 -- nvmf/common.sh@298 -- # mlx=() 00:13:02.663 04:04:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:02.663 04:04:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:02.663 04:04:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:02.663 04:04:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:02.663 04:04:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:02.663 04:04:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:02.664 04:04:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:02.664 04:04:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:02.664 04:04:16 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:02.664 04:04:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:02.664 04:04:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:02.664 04:04:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:02.664 04:04:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:02.664 04:04:16 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:02.664 04:04:16 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:02.664 04:04:16 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:02.664 04:04:16 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:02.664 04:04:16 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:02.664 04:04:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:02.664 04:04:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:02.664 04:04:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:02.664 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:02.664 04:04:16 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:02.664 04:04:16 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:02.664 04:04:16 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:02.664 04:04:16 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:02.664 04:04:16 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:02.664 04:04:16 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:02.664 04:04:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:02.664 04:04:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:02.664 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:02.664 04:04:16 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:02.664 04:04:16 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:02.664 04:04:16 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:02.664 04:04:16 -- 
nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:02.664 04:04:16 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:02.664 04:04:16 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:02.664 04:04:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:02.664 04:04:16 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:02.664 04:04:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:02.664 04:04:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.664 04:04:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:02.664 04:04:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.664 04:04:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:02.664 Found net devices under 0000:18:00.0: mlx_0_0 00:13:02.664 04:04:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.664 04:04:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:02.664 04:04:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.664 04:04:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:02.664 04:04:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.664 04:04:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:02.664 Found net devices under 0000:18:00.1: mlx_0_1 00:13:02.664 04:04:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.664 04:04:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:02.664 04:04:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:02.664 04:04:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:02.664 04:04:16 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:13:02.664 04:04:16 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:13:02.664 04:04:16 -- nvmf/common.sh@409 -- # rdma_device_init 00:13:02.664 04:04:16 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:13:02.664 04:04:16 -- nvmf/common.sh@58 -- # uname 00:13:02.664 
04:04:16 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:02.664 04:04:16 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:02.664 04:04:16 -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:02.664 04:04:16 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:02.664 04:04:16 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:02.664 04:04:16 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:02.664 04:04:16 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:02.664 04:04:16 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:02.664 04:04:16 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:13:02.664 04:04:16 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:02.664 04:04:16 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:02.664 04:04:16 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:02.664 04:04:16 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:02.664 04:04:16 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:02.664 04:04:16 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:02.664 04:04:17 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:02.664 04:04:17 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:02.664 04:04:17 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:02.664 04:04:17 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:02.664 04:04:17 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:02.664 04:04:17 -- nvmf/common.sh@105 -- # continue 2 00:13:02.664 04:04:17 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:02.664 04:04:17 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:02.664 04:04:17 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:02.664 04:04:17 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:02.664 04:04:17 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:02.664 04:04:17 -- nvmf/common.sh@104 -- # echo mlx_0_1 
00:13:02.664 04:04:17 -- nvmf/common.sh@105 -- # continue 2 00:13:02.664 04:04:17 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:02.664 04:04:17 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:02.664 04:04:17 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:02.664 04:04:17 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:02.664 04:04:17 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:02.664 04:04:17 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:02.664 04:04:17 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:02.664 04:04:17 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:02.664 04:04:17 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:02.664 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:02.664 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:13:02.664 altname enp24s0f0np0 00:13:02.664 altname ens785f0np0 00:13:02.664 inet 192.168.100.8/24 scope global mlx_0_0 00:13:02.664 valid_lft forever preferred_lft forever 00:13:02.664 04:04:17 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:02.664 04:04:17 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:02.664 04:04:17 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:02.664 04:04:17 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:02.664 04:04:17 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:02.664 04:04:17 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:02.664 04:04:17 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:02.664 04:04:17 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:02.664 04:04:17 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:02.664 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:02.664 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:13:02.664 altname enp24s0f1np1 00:13:02.664 altname ens785f1np1 00:13:02.664 inet 192.168.100.9/24 scope global mlx_0_1 00:13:02.664 valid_lft forever preferred_lft forever 00:13:02.664 04:04:17 -- nvmf/common.sh@411 
-- # return 0 00:13:02.664 04:04:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:02.664 04:04:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:02.664 04:04:17 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:13:02.664 04:04:17 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:13:02.664 04:04:17 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:02.664 04:04:17 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:02.664 04:04:17 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:02.664 04:04:17 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:02.664 04:04:17 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:02.664 04:04:17 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:02.664 04:04:17 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:02.664 04:04:17 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:02.664 04:04:17 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:02.664 04:04:17 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:02.664 04:04:17 -- nvmf/common.sh@105 -- # continue 2 00:13:02.664 04:04:17 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:02.664 04:04:17 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:02.664 04:04:17 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:02.664 04:04:17 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:02.664 04:04:17 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:02.664 04:04:17 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:02.664 04:04:17 -- nvmf/common.sh@105 -- # continue 2 00:13:02.664 04:04:17 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:02.664 04:04:17 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:02.664 04:04:17 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:02.664 04:04:17 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 
00:13:02.664 04:04:17 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:02.664 04:04:17 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:02.664 04:04:17 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:02.664 04:04:17 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:02.664 04:04:17 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:02.664 04:04:17 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:02.664 04:04:17 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:02.664 04:04:17 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:02.664 04:04:17 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:13:02.664 192.168.100.9' 00:13:02.664 04:04:17 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:13:02.664 192.168.100.9' 00:13:02.664 04:04:17 -- nvmf/common.sh@446 -- # head -n 1 00:13:02.664 04:04:17 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:02.664 04:04:17 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:13:02.664 192.168.100.9' 00:13:02.664 04:04:17 -- nvmf/common.sh@447 -- # tail -n +2 00:13:02.664 04:04:17 -- nvmf/common.sh@447 -- # head -n 1 00:13:02.664 04:04:17 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:02.665 04:04:17 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:13:02.665 04:04:17 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:02.665 04:04:17 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:13:02.665 04:04:17 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:13:02.665 04:04:17 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:13:02.665 04:04:17 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:02.665 04:04:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:02.665 04:04:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:02.665 04:04:17 -- common/autotest_common.sh@10 -- # set +x 00:13:02.665 04:04:17 -- nvmf/common.sh@470 -- # nvmfpid=262314 00:13:02.665 04:04:17 -- nvmf/common.sh@471 -- # 
waitforlisten 262314 00:13:02.665 04:04:17 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:02.665 04:04:17 -- common/autotest_common.sh@817 -- # '[' -z 262314 ']' 00:13:02.665 04:04:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.665 04:04:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:02.665 04:04:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.665 04:04:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:02.665 04:04:17 -- common/autotest_common.sh@10 -- # set +x 00:13:02.665 [2024-04-19 04:04:17.176625] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:13:02.665 [2024-04-19 04:04:17.176673] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.925 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.925 [2024-04-19 04:04:17.230894] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.925 [2024-04-19 04:04:17.304894] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.925 [2024-04-19 04:04:17.304927] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.925 [2024-04-19 04:04:17.304933] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.925 [2024-04-19 04:04:17.304939] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
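The trace above derives `NVMF_FIRST_TARGET_IP` and `NVMF_SECOND_TARGET_IP` by parsing `ip -o -4 addr show` output with `awk '{print $4}' | cut -d/ -f1`, then splitting the newline-separated `RDMA_IP_LIST` with `head`/`tail`. A standalone sketch of that pipeline, run against sample text captured from this log rather than a live RDMA interface:

```shell
#!/bin/sh
# One-line output of `ip -o -4 addr show mlx_0_0` (sample from the log).
line='2: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0'
# Field 4 is the address in CIDR form; cut strips the /prefix length.
ip0=$(printf '%s\n' "$line" | awk '{print $4}' | cut -d/ -f1)

# RDMA_IP_LIST is newline-separated; head/tail select the first and
# second entries exactly as nvmf/common.sh does.
rdma_ip_list="$ip0
192.168.100.9"
first=$(printf '%s\n' "$rdma_ip_list" | head -n 1)
second=$(printf '%s\n' "$rdma_ip_list" | tail -n +2 | head -n 1)
echo "$first"    # 192.168.100.8
echo "$second"   # 192.168.100.9
```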
00:13:02.925 [2024-04-19 04:04:17.304943] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:02.925 [2024-04-19 04:04:17.304978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.494 04:04:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:03.494 04:04:17 -- common/autotest_common.sh@850 -- # return 0 00:13:03.494 04:04:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:03.494 04:04:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:03.494 04:04:17 -- common/autotest_common.sh@10 -- # set +x 00:13:03.494 04:04:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.494 04:04:17 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:03.494 04:04:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.494 04:04:17 -- common/autotest_common.sh@10 -- # set +x 00:13:03.494 [2024-04-19 04:04:17.995999] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17ee830/0x17f2d20) succeed. 00:13:03.494 [2024-04-19 04:04:18.003762] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17efd30/0x18343b0) succeed. 
00:13:03.755 04:04:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.755 04:04:18 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:03.755 04:04:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.755 04:04:18 -- common/autotest_common.sh@10 -- # set +x 00:13:03.755 Malloc0 00:13:03.755 04:04:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.755 04:04:18 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:03.755 04:04:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.755 04:04:18 -- common/autotest_common.sh@10 -- # set +x 00:13:03.755 04:04:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.755 04:04:18 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:03.755 04:04:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.755 04:04:18 -- common/autotest_common.sh@10 -- # set +x 00:13:03.755 04:04:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.755 04:04:18 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:03.755 04:04:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.755 04:04:18 -- common/autotest_common.sh@10 -- # set +x 00:13:03.755 [2024-04-19 04:04:18.090358] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:03.755 04:04:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.755 04:04:18 -- target/queue_depth.sh@30 -- # bdevperf_pid=262483 00:13:03.755 04:04:18 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:03.755 04:04:18 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:13:03.755 04:04:18 -- target/queue_depth.sh@33 -- # waitforlisten 262483 /var/tmp/bdevperf.sock 00:13:03.755 04:04:18 -- common/autotest_common.sh@817 -- # '[' -z 262483 ']' 00:13:03.755 04:04:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:03.755 04:04:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:03.755 04:04:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:03.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:03.755 04:04:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:03.755 04:04:18 -- common/autotest_common.sh@10 -- # set +x 00:13:03.755 [2024-04-19 04:04:18.134127] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:13:03.755 [2024-04-19 04:04:18.134164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid262483 ] 00:13:03.755 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.755 [2024-04-19 04:04:18.182345] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.755 [2024-04-19 04:04:18.249033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.698 04:04:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:04.699 04:04:18 -- common/autotest_common.sh@850 -- # return 0 00:13:04.699 04:04:18 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:04.699 04:04:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.699 04:04:18 -- common/autotest_common.sh@10 -- # set +x 00:13:04.699 NVMe0n1 00:13:04.699 04:04:18 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.699 04:04:18 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:04.699 Running I/O for 10 seconds... 00:13:14.712 00:13:14.712 Latency(us) 00:13:14.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:14.712 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:14.712 Verification LBA range: start 0x0 length 0x4000 00:13:14.712 NVMe0n1 : 10.04 19458.55 76.01 0.00 0.00 52478.60 16214.09 33399.09 00:13:14.712 =================================================================================================================== 00:13:14.712 Total : 19458.55 76.01 0.00 0.00 52478.60 16214.09 33399.09 00:13:14.712 0 00:13:14.712 04:04:29 -- target/queue_depth.sh@39 -- # killprocess 262483 00:13:14.712 04:04:29 -- common/autotest_common.sh@936 -- # '[' -z 262483 ']' 00:13:14.712 04:04:29 -- common/autotest_common.sh@940 -- # kill -0 262483 00:13:14.712 04:04:29 -- common/autotest_common.sh@941 -- # uname 00:13:14.712 04:04:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:14.712 04:04:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 262483 00:13:14.712 04:04:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:14.712 04:04:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:14.712 04:04:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 262483' 00:13:14.712 killing process with pid 262483 00:13:14.712 04:04:29 -- common/autotest_common.sh@955 -- # kill 262483 00:13:14.712 Received shutdown signal, test time was about 10.000000 seconds 00:13:14.712 00:13:14.712 Latency(us) 00:13:14.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:14.712 
=================================================================================================================== 00:13:14.712 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:14.712 04:04:29 -- common/autotest_common.sh@960 -- # wait 262483 00:13:14.971 04:04:29 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:14.971 04:04:29 -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:14.971 04:04:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:14.971 04:04:29 -- nvmf/common.sh@117 -- # sync 00:13:14.971 04:04:29 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:14.971 04:04:29 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:14.971 04:04:29 -- nvmf/common.sh@120 -- # set +e 00:13:14.971 04:04:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:14.971 04:04:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:14.971 rmmod nvme_rdma 00:13:14.971 rmmod nvme_fabrics 00:13:14.971 04:04:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:14.971 04:04:29 -- nvmf/common.sh@124 -- # set -e 00:13:14.971 04:04:29 -- nvmf/common.sh@125 -- # return 0 00:13:14.971 04:04:29 -- nvmf/common.sh@478 -- # '[' -n 262314 ']' 00:13:14.971 04:04:29 -- nvmf/common.sh@479 -- # killprocess 262314 00:13:14.971 04:04:29 -- common/autotest_common.sh@936 -- # '[' -z 262314 ']' 00:13:14.971 04:04:29 -- common/autotest_common.sh@940 -- # kill -0 262314 00:13:14.971 04:04:29 -- common/autotest_common.sh@941 -- # uname 00:13:14.971 04:04:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:14.971 04:04:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 262314 00:13:14.971 04:04:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:14.971 04:04:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:14.971 04:04:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 262314' 00:13:14.971 killing process with pid 262314 00:13:14.971 04:04:29 -- common/autotest_common.sh@955 -- # 
kill 262314 00:13:14.971 04:04:29 -- common/autotest_common.sh@960 -- # wait 262314 00:13:15.230 04:04:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:15.230 04:04:29 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:13:15.230 00:13:15.230 real 0m18.018s 00:13:15.230 user 0m25.673s 00:13:15.230 sys 0m4.520s 00:13:15.230 04:04:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:15.230 04:04:29 -- common/autotest_common.sh@10 -- # set +x 00:13:15.230 ************************************ 00:13:15.230 END TEST nvmf_queue_depth 00:13:15.230 ************************************ 00:13:15.230 04:04:29 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:13:15.230 04:04:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:15.230 04:04:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:15.230 04:04:29 -- common/autotest_common.sh@10 -- # set +x 00:13:15.491 ************************************ 00:13:15.491 START TEST nvmf_multipath 00:13:15.491 ************************************ 00:13:15.491 04:04:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:13:15.491 * Looking for test storage... 
00:13:15.491 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:15.491 04:04:29 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.491 04:04:29 -- nvmf/common.sh@7 -- # uname -s 00:13:15.491 04:04:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.491 04:04:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.491 04:04:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.491 04:04:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.491 04:04:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.491 04:04:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.491 04:04:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.491 04:04:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.491 04:04:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.491 04:04:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.491 04:04:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:15.491 04:04:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:13:15.491 04:04:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.491 04:04:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.491 04:04:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.491 04:04:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.491 04:04:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:15.491 04:04:29 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.491 04:04:29 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.491 04:04:29 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.491 04:04:29 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.491 04:04:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.491 04:04:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.491 04:04:29 -- paths/export.sh@5 -- # export PATH 00:13:15.491 04:04:29 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.491 04:04:29 -- nvmf/common.sh@47 -- # : 0 00:13:15.491 04:04:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:15.491 04:04:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:15.491 04:04:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.491 04:04:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.491 04:04:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.491 04:04:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:15.491 04:04:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:15.491 04:04:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:15.491 04:04:29 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:15.491 04:04:29 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:15.491 04:04:29 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:15.491 04:04:29 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:15.491 04:04:29 -- target/multipath.sh@43 -- # nvmftestinit 00:13:15.491 04:04:29 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:13:15.491 04:04:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.491 04:04:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:15.491 04:04:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:15.491 04:04:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:15.491 04:04:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:13:15.491 04:04:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:15.491 04:04:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.491 04:04:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:15.491 04:04:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:15.491 04:04:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:15.491 04:04:30 -- common/autotest_common.sh@10 -- # set +x 00:13:20.776 04:04:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:20.776 04:04:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:20.776 04:04:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:20.776 04:04:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:20.776 04:04:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:20.776 04:04:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:20.776 04:04:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:20.776 04:04:35 -- nvmf/common.sh@295 -- # net_devs=() 00:13:20.776 04:04:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:20.776 04:04:35 -- nvmf/common.sh@296 -- # e810=() 00:13:20.776 04:04:35 -- nvmf/common.sh@296 -- # local -ga e810 00:13:20.776 04:04:35 -- nvmf/common.sh@297 -- # x722=() 00:13:20.776 04:04:35 -- nvmf/common.sh@297 -- # local -ga x722 00:13:20.776 04:04:35 -- nvmf/common.sh@298 -- # mlx=() 00:13:20.776 04:04:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:20.776 04:04:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.776 04:04:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.776 04:04:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.776 04:04:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.776 04:04:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.776 04:04:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:13:20.776 04:04:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.776 04:04:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.776 04:04:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.776 04:04:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.776 04:04:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.776 04:04:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:20.776 04:04:35 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:20.776 04:04:35 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:20.776 04:04:35 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:20.776 04:04:35 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:20.776 04:04:35 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:20.776 04:04:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:20.776 04:04:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.776 04:04:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:20.776 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:20.776 04:04:35 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:20.776 04:04:35 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:20.776 04:04:35 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:20.776 04:04:35 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:20.776 04:04:35 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:20.776 04:04:35 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:20.776 04:04:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.776 04:04:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:20.776 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:20.776 04:04:35 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:20.776 04:04:35 -- nvmf/common.sh@346 -- # [[ mlx5_core == 
unbound ]] 00:13:20.776 04:04:35 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:20.776 04:04:35 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:20.776 04:04:35 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:20.776 04:04:35 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:20.776 04:04:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:20.776 04:04:35 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:20.776 04:04:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.776 04:04:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.776 04:04:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:20.776 04:04:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.776 04:04:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:20.776 Found net devices under 0000:18:00.0: mlx_0_0 00:13:20.776 04:04:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.776 04:04:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.776 04:04:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.776 04:04:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:20.776 04:04:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.776 04:04:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:20.776 Found net devices under 0000:18:00.1: mlx_0_1 00:13:20.776 04:04:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.776 04:04:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:20.776 04:04:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:20.776 04:04:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:20.776 04:04:35 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:13:20.776 04:04:35 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:13:20.777 04:04:35 -- nvmf/common.sh@409 -- # rdma_device_init 00:13:20.777 04:04:35 -- 
nvmf/common.sh@490 -- # load_ib_rdma_modules 00:13:20.777 04:04:35 -- nvmf/common.sh@58 -- # uname 00:13:20.777 04:04:35 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:20.777 04:04:35 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:20.777 04:04:35 -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:20.777 04:04:35 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:20.777 04:04:35 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:20.777 04:04:35 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:20.777 04:04:35 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:20.777 04:04:35 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:20.777 04:04:35 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:13:20.777 04:04:35 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:20.777 04:04:35 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:20.777 04:04:35 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:20.777 04:04:35 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:20.777 04:04:35 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:20.777 04:04:35 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:20.777 04:04:35 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:20.777 04:04:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:20.777 04:04:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.777 04:04:35 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:20.777 04:04:35 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:20.777 04:04:35 -- nvmf/common.sh@105 -- # continue 2 00:13:20.777 04:04:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:20.777 04:04:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.777 04:04:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:20.777 04:04:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:20.777 04:04:35 -- 
nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:20.777 04:04:35 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:20.777 04:04:35 -- nvmf/common.sh@105 -- # continue 2 00:13:20.777 04:04:35 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:20.777 04:04:35 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:20.777 04:04:35 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:20.777 04:04:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:20.777 04:04:35 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:20.777 04:04:35 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:20.777 04:04:35 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:20.777 04:04:35 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:20.777 04:04:35 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:20.777 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:20.777 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:13:20.777 altname enp24s0f0np0 00:13:20.777 altname ens785f0np0 00:13:20.777 inet 192.168.100.8/24 scope global mlx_0_0 00:13:20.777 valid_lft forever preferred_lft forever 00:13:20.777 04:04:35 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:20.777 04:04:35 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:20.777 04:04:35 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:20.777 04:04:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:20.777 04:04:35 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:20.777 04:04:35 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:21.038 04:04:35 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:21.038 04:04:35 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:21.038 04:04:35 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:21.038 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:21.038 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:13:21.038 altname enp24s0f1np1 00:13:21.038 altname ens785f1np1 00:13:21.038 inet 192.168.100.9/24 
scope global mlx_0_1 00:13:21.038 valid_lft forever preferred_lft forever 00:13:21.038 04:04:35 -- nvmf/common.sh@411 -- # return 0 00:13:21.038 04:04:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:21.038 04:04:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:21.038 04:04:35 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:13:21.038 04:04:35 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:13:21.038 04:04:35 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:21.038 04:04:35 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:21.038 04:04:35 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:21.038 04:04:35 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:21.038 04:04:35 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:21.038 04:04:35 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:21.038 04:04:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:21.038 04:04:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:21.038 04:04:35 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:21.038 04:04:35 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:21.038 04:04:35 -- nvmf/common.sh@105 -- # continue 2 00:13:21.038 04:04:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:21.038 04:04:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:21.038 04:04:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:21.038 04:04:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:21.038 04:04:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:21.038 04:04:35 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:21.038 04:04:35 -- nvmf/common.sh@105 -- # continue 2 00:13:21.038 04:04:35 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:21.038 04:04:35 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:21.038 04:04:35 -- 
nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:21.038 04:04:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:21.038 04:04:35 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:21.038 04:04:35 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:21.038 04:04:35 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:21.038 04:04:35 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:21.038 04:04:35 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:21.038 04:04:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:21.038 04:04:35 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:21.038 04:04:35 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:21.038 04:04:35 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:13:21.038 192.168.100.9' 00:13:21.038 04:04:35 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:13:21.038 192.168.100.9' 00:13:21.038 04:04:35 -- nvmf/common.sh@446 -- # head -n 1 00:13:21.038 04:04:35 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:21.038 04:04:35 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:13:21.038 192.168.100.9' 00:13:21.038 04:04:35 -- nvmf/common.sh@447 -- # tail -n +2 00:13:21.038 04:04:35 -- nvmf/common.sh@447 -- # head -n 1 00:13:21.038 04:04:35 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:21.038 04:04:35 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:13:21.038 04:04:35 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:21.038 04:04:35 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:13:21.038 04:04:35 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:13:21.038 04:04:35 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:13:21.038 04:04:35 -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:13:21.038 04:04:35 -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:13:21.038 04:04:35 -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:13:21.038 run this test only with 
TCP transport for now 00:13:21.038 04:04:35 -- target/multipath.sh@53 -- # nvmftestfini 00:13:21.038 04:04:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:21.038 04:04:35 -- nvmf/common.sh@117 -- # sync 00:13:21.038 04:04:35 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:21.038 04:04:35 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:21.038 04:04:35 -- nvmf/common.sh@120 -- # set +e 00:13:21.038 04:04:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:21.038 04:04:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:21.038 rmmod nvme_rdma 00:13:21.038 rmmod nvme_fabrics 00:13:21.038 04:04:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:21.038 04:04:35 -- nvmf/common.sh@124 -- # set -e 00:13:21.038 04:04:35 -- nvmf/common.sh@125 -- # return 0 00:13:21.038 04:04:35 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:13:21.038 04:04:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:21.038 04:04:35 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:13:21.038 04:04:35 -- target/multipath.sh@54 -- # exit 0 00:13:21.038 04:04:35 -- target/multipath.sh@1 -- # nvmftestfini 00:13:21.038 04:04:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:21.038 04:04:35 -- nvmf/common.sh@117 -- # sync 00:13:21.038 04:04:35 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:21.038 04:04:35 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:21.038 04:04:35 -- nvmf/common.sh@120 -- # set +e 00:13:21.038 04:04:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:21.038 04:04:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:21.038 04:04:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:21.038 04:04:35 -- nvmf/common.sh@124 -- # set -e 00:13:21.038 04:04:35 -- nvmf/common.sh@125 -- # return 0 00:13:21.038 04:04:35 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:13:21.038 04:04:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:21.038 04:04:35 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:13:21.038 00:13:21.038 real 0m5.576s 
00:13:21.038 user 0m1.569s 00:13:21.038 sys 0m4.080s 00:13:21.038 04:04:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:21.038 04:04:35 -- common/autotest_common.sh@10 -- # set +x 00:13:21.038 ************************************ 00:13:21.038 END TEST nvmf_multipath 00:13:21.038 ************************************ 00:13:21.039 04:04:35 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:13:21.039 04:04:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:21.039 04:04:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:21.039 04:04:35 -- common/autotest_common.sh@10 -- # set +x 00:13:21.299 ************************************ 00:13:21.299 START TEST nvmf_zcopy 00:13:21.299 ************************************ 00:13:21.299 04:04:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:13:21.299 * Looking for test storage... 
00:13:21.299 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:21.299 04:04:35 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:21.299 04:04:35 -- nvmf/common.sh@7 -- # uname -s 00:13:21.299 04:04:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:21.299 04:04:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.299 04:04:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.299 04:04:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.299 04:04:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.299 04:04:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.299 04:04:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.299 04:04:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.299 04:04:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.299 04:04:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.299 04:04:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:21.299 04:04:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:13:21.299 04:04:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.299 04:04:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.299 04:04:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:21.299 04:04:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:21.299 04:04:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:21.299 04:04:35 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.299 04:04:35 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.299 04:04:35 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.299 04:04:35 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.299 04:04:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.299 04:04:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.299 04:04:35 -- paths/export.sh@5 -- # export PATH 00:13:21.299 04:04:35 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.299 04:04:35 -- nvmf/common.sh@47 -- # : 0 00:13:21.299 04:04:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:21.299 04:04:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:21.299 04:04:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:21.299 04:04:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.299 04:04:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.299 04:04:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:21.299 04:04:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:21.299 04:04:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:21.299 04:04:35 -- target/zcopy.sh@12 -- # nvmftestinit 00:13:21.299 04:04:35 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:13:21.299 04:04:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:21.299 04:04:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:21.299 04:04:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:21.299 04:04:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:21.299 04:04:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.300 04:04:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:21.300 04:04:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.300 04:04:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:21.300 04:04:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:21.300 04:04:35 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:13:21.300 04:04:35 -- common/autotest_common.sh@10 -- # set +x 00:13:26.579 04:04:40 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:26.579 04:04:40 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:26.579 04:04:40 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:26.579 04:04:40 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:26.579 04:04:40 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:26.579 04:04:40 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:26.579 04:04:40 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:26.579 04:04:40 -- nvmf/common.sh@295 -- # net_devs=() 00:13:26.579 04:04:40 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:26.579 04:04:40 -- nvmf/common.sh@296 -- # e810=() 00:13:26.579 04:04:40 -- nvmf/common.sh@296 -- # local -ga e810 00:13:26.579 04:04:40 -- nvmf/common.sh@297 -- # x722=() 00:13:26.579 04:04:40 -- nvmf/common.sh@297 -- # local -ga x722 00:13:26.579 04:04:40 -- nvmf/common.sh@298 -- # mlx=() 00:13:26.579 04:04:40 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:26.579 04:04:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:26.579 04:04:40 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:26.579 04:04:40 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:26.579 04:04:40 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:26.579 04:04:40 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:26.579 04:04:40 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:26.579 04:04:40 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:26.579 04:04:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:26.579 04:04:40 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:26.579 04:04:40 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:26.579 04:04:40 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:26.579 04:04:40 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:26.579 04:04:40 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:26.579 04:04:40 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:26.580 04:04:40 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:26.580 04:04:40 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:26.580 04:04:40 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:26.580 04:04:40 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:26.580 04:04:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:26.580 04:04:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:26.580 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:26.580 04:04:40 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:26.580 04:04:40 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:26.580 04:04:40 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:26.580 04:04:40 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:26.580 04:04:40 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:26.580 04:04:40 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:26.580 04:04:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:26.580 04:04:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:26.580 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:26.580 04:04:40 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:26.580 04:04:40 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:26.580 04:04:40 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:26.580 04:04:40 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:26.580 04:04:40 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:26.580 04:04:40 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:26.580 
04:04:40 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:26.580 04:04:40 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:26.580 04:04:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:26.580 04:04:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.580 04:04:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:26.580 04:04:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.580 04:04:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:26.580 Found net devices under 0000:18:00.0: mlx_0_0 00:13:26.580 04:04:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.580 04:04:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:26.580 04:04:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.580 04:04:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:26.580 04:04:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.580 04:04:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:26.580 Found net devices under 0000:18:00.1: mlx_0_1 00:13:26.580 04:04:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.580 04:04:40 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:26.580 04:04:40 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:26.580 04:04:40 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:26.580 04:04:40 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:13:26.580 04:04:40 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:13:26.580 04:04:40 -- nvmf/common.sh@409 -- # rdma_device_init 00:13:26.580 04:04:40 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:13:26.580 04:04:40 -- nvmf/common.sh@58 -- # uname 00:13:26.580 04:04:40 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:26.580 04:04:40 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:26.580 04:04:40 -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:26.580 04:04:40 -- 
nvmf/common.sh@64 -- # modprobe ib_umad 00:13:26.580 04:04:40 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:26.580 04:04:40 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:26.580 04:04:40 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:26.580 04:04:40 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:26.580 04:04:40 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:13:26.580 04:04:40 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:26.580 04:04:40 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:26.580 04:04:40 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:26.580 04:04:40 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:26.580 04:04:40 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:26.580 04:04:40 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:26.580 04:04:40 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:26.580 04:04:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:26.580 04:04:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:26.580 04:04:40 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:26.580 04:04:40 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:26.580 04:04:40 -- nvmf/common.sh@105 -- # continue 2 00:13:26.580 04:04:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:26.580 04:04:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:26.580 04:04:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:26.580 04:04:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:26.580 04:04:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:26.580 04:04:40 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:26.580 04:04:40 -- nvmf/common.sh@105 -- # continue 2 00:13:26.580 04:04:40 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:26.580 04:04:40 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 
00:13:26.580 04:04:40 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:26.580 04:04:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:26.580 04:04:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:26.580 04:04:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:26.580 04:04:40 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:26.580 04:04:40 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:26.580 04:04:40 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:26.580 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:26.580 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:13:26.580 altname enp24s0f0np0 00:13:26.580 altname ens785f0np0 00:13:26.580 inet 192.168.100.8/24 scope global mlx_0_0 00:13:26.580 valid_lft forever preferred_lft forever 00:13:26.580 04:04:40 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:26.580 04:04:40 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:26.580 04:04:40 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:26.580 04:04:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:26.580 04:04:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:26.580 04:04:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:26.580 04:04:40 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:26.580 04:04:40 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:26.580 04:04:40 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:26.580 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:26.580 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:13:26.580 altname enp24s0f1np1 00:13:26.580 altname ens785f1np1 00:13:26.580 inet 192.168.100.9/24 scope global mlx_0_1 00:13:26.580 valid_lft forever preferred_lft forever 00:13:26.580 04:04:40 -- nvmf/common.sh@411 -- # return 0 00:13:26.580 04:04:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:26.580 04:04:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:26.580 04:04:40 -- nvmf/common.sh@444 -- # [[ 
rdma == \r\d\m\a ]] 00:13:26.580 04:04:40 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:13:26.580 04:04:40 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:26.580 04:04:40 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:26.580 04:04:40 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:26.580 04:04:40 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:26.580 04:04:40 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:26.580 04:04:40 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:26.580 04:04:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:26.580 04:04:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:26.580 04:04:40 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:26.580 04:04:40 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:26.580 04:04:40 -- nvmf/common.sh@105 -- # continue 2 00:13:26.580 04:04:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:26.580 04:04:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:26.580 04:04:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:26.580 04:04:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:26.580 04:04:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:26.580 04:04:40 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:26.580 04:04:40 -- nvmf/common.sh@105 -- # continue 2 00:13:26.580 04:04:40 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:26.580 04:04:40 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:26.580 04:04:40 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:26.580 04:04:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:26.580 04:04:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:26.580 04:04:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:26.580 04:04:40 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 
00:13:26.580 04:04:40 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:26.580 04:04:40 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:26.580 04:04:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:26.580 04:04:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:26.580 04:04:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:26.580 04:04:40 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:13:26.580 192.168.100.9' 00:13:26.580 04:04:40 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:13:26.580 192.168.100.9' 00:13:26.580 04:04:40 -- nvmf/common.sh@446 -- # head -n 1 00:13:26.580 04:04:40 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:26.580 04:04:40 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:13:26.580 192.168.100.9' 00:13:26.580 04:04:40 -- nvmf/common.sh@447 -- # tail -n +2 00:13:26.580 04:04:40 -- nvmf/common.sh@447 -- # head -n 1 00:13:26.580 04:04:40 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:26.580 04:04:40 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:13:26.580 04:04:40 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:26.580 04:04:40 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:13:26.580 04:04:40 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:13:26.580 04:04:40 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:13:26.580 04:04:40 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:26.580 04:04:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:26.580 04:04:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:26.581 04:04:40 -- common/autotest_common.sh@10 -- # set +x 00:13:26.581 04:04:40 -- nvmf/common.sh@470 -- # nvmfpid=271067 00:13:26.581 04:04:40 -- nvmf/common.sh@471 -- # waitforlisten 271067 00:13:26.581 04:04:40 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:26.581 04:04:40 -- common/autotest_common.sh@817 -- # '[' -z 
271067 ']' 00:13:26.581 04:04:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.581 04:04:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:26.581 04:04:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.581 04:04:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:26.581 04:04:40 -- common/autotest_common.sh@10 -- # set +x 00:13:26.581 [2024-04-19 04:04:40.699719] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:13:26.581 [2024-04-19 04:04:40.699767] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.581 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.581 [2024-04-19 04:04:40.751077] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.581 [2024-04-19 04:04:40.817769] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:26.581 [2024-04-19 04:04:40.817809] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:26.581 [2024-04-19 04:04:40.817814] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:26.581 [2024-04-19 04:04:40.817819] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:26.581 [2024-04-19 04:04:40.817824] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
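The trace above (`nvmf/common.sh@445`–`@447`) builds the two-line `RDMA_IP_LIST` from the mlx interfaces and then splits it into `NVMF_FIRST_TARGET_IP` and `NVMF_SECOND_TARGET_IP` with `head`/`tail`. A minimal standalone sketch of that splitting step, using the literal addresses from this run (the variable names mirror the script; this is an illustration, not the script itself):

```shell
#!/bin/sh
# Two addresses, one per line, as produced by get_available_rdma_ips in the trace.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'

# First line becomes the first target IP.
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)

# Skip the first line, then take one line: the second target IP.
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"
```

Running it prints `192.168.100.8 192.168.100.9`, matching the values the trace assigns before appending `--num-shared-buffers 1024` to `NVMF_TRANSPORT_OPTS`.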
00:13:26.581 [2024-04-19 04:04:40.817839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.151 04:04:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:27.151 04:04:41 -- common/autotest_common.sh@850 -- # return 0 00:13:27.151 04:04:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:27.151 04:04:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:27.151 04:04:41 -- common/autotest_common.sh@10 -- # set +x 00:13:27.151 04:04:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:27.151 04:04:41 -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:13:27.151 04:04:41 -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:13:27.151 Unsupported transport: rdma 00:13:27.151 04:04:41 -- target/zcopy.sh@17 -- # exit 0 00:13:27.151 04:04:41 -- target/zcopy.sh@1 -- # process_shm --id 0 00:13:27.151 04:04:41 -- common/autotest_common.sh@794 -- # type=--id 00:13:27.151 04:04:41 -- common/autotest_common.sh@795 -- # id=0 00:13:27.151 04:04:41 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:13:27.151 04:04:41 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:27.151 04:04:41 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:13:27.151 04:04:41 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:13:27.151 04:04:41 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:13:27.151 04:04:41 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:27.151 nvmf_trace.0 00:13:27.151 04:04:41 -- common/autotest_common.sh@809 -- # return 0 00:13:27.151 04:04:41 -- target/zcopy.sh@1 -- # nvmftestfini 00:13:27.151 04:04:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:27.151 04:04:41 -- nvmf/common.sh@117 -- # sync 00:13:27.151 04:04:41 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 
00:13:27.151 04:04:41 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:27.151 04:04:41 -- nvmf/common.sh@120 -- # set +e 00:13:27.151 04:04:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:27.151 04:04:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:27.151 rmmod nvme_rdma 00:13:27.151 rmmod nvme_fabrics 00:13:27.151 04:04:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:27.151 04:04:41 -- nvmf/common.sh@124 -- # set -e 00:13:27.151 04:04:41 -- nvmf/common.sh@125 -- # return 0 00:13:27.151 04:04:41 -- nvmf/common.sh@478 -- # '[' -n 271067 ']' 00:13:27.151 04:04:41 -- nvmf/common.sh@479 -- # killprocess 271067 00:13:27.151 04:04:41 -- common/autotest_common.sh@936 -- # '[' -z 271067 ']' 00:13:27.151 04:04:41 -- common/autotest_common.sh@940 -- # kill -0 271067 00:13:27.151 04:04:41 -- common/autotest_common.sh@941 -- # uname 00:13:27.151 04:04:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:27.151 04:04:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 271067 00:13:27.151 04:04:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:27.151 04:04:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:27.151 04:04:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 271067' 00:13:27.151 killing process with pid 271067 00:13:27.151 04:04:41 -- common/autotest_common.sh@955 -- # kill 271067 00:13:27.151 04:04:41 -- common/autotest_common.sh@960 -- # wait 271067 00:13:27.412 04:04:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:27.412 04:04:41 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:13:27.412 00:13:27.412 real 0m6.189s 00:13:27.412 user 0m2.755s 00:13:27.412 sys 0m3.953s 00:13:27.412 04:04:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:27.412 04:04:41 -- common/autotest_common.sh@10 -- # set +x 00:13:27.412 ************************************ 00:13:27.412 END TEST nvmf_zcopy 00:13:27.412 
************************************ 00:13:27.412 04:04:41 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:13:27.412 04:04:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:27.412 04:04:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:27.412 04:04:41 -- common/autotest_common.sh@10 -- # set +x 00:13:27.673 ************************************ 00:13:27.673 START TEST nvmf_nmic 00:13:27.673 ************************************ 00:13:27.673 04:04:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:13:27.673 * Looking for test storage... 00:13:27.673 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:27.673 04:04:42 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:27.673 04:04:42 -- nvmf/common.sh@7 -- # uname -s 00:13:27.673 04:04:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.673 04:04:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.673 04:04:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.673 04:04:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.673 04:04:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.673 04:04:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.673 04:04:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.673 04:04:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.673 04:04:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.673 04:04:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.673 04:04:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:27.673 04:04:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:13:27.673 04:04:42 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.673 04:04:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.673 04:04:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:27.673 04:04:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.673 04:04:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:27.673 04:04:42 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.673 04:04:42 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.673 04:04:42 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.673 04:04:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.673 04:04:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.673 04:04:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.673 04:04:42 -- paths/export.sh@5 -- # export PATH 00:13:27.673 04:04:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.673 04:04:42 -- nvmf/common.sh@47 -- # : 0 00:13:27.673 04:04:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:27.673 04:04:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:27.673 04:04:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.673 04:04:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.673 04:04:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.673 04:04:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:27.673 04:04:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:27.673 04:04:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:27.673 04:04:42 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:27.673 04:04:42 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:27.673 04:04:42 -- target/nmic.sh@14 -- # 
nvmftestinit 00:13:27.673 04:04:42 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:13:27.673 04:04:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.673 04:04:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:27.673 04:04:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:27.673 04:04:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:27.673 04:04:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.673 04:04:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:27.673 04:04:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.673 04:04:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:27.673 04:04:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:27.673 04:04:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:27.673 04:04:42 -- common/autotest_common.sh@10 -- # set +x 00:13:34.257 04:04:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:34.257 04:04:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:34.257 04:04:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:34.257 04:04:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:34.257 04:04:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:34.257 04:04:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:34.257 04:04:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:34.257 04:04:47 -- nvmf/common.sh@295 -- # net_devs=() 00:13:34.257 04:04:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:34.257 04:04:47 -- nvmf/common.sh@296 -- # e810=() 00:13:34.257 04:04:47 -- nvmf/common.sh@296 -- # local -ga e810 00:13:34.257 04:04:47 -- nvmf/common.sh@297 -- # x722=() 00:13:34.257 04:04:47 -- nvmf/common.sh@297 -- # local -ga x722 00:13:34.257 04:04:47 -- nvmf/common.sh@298 -- # mlx=() 00:13:34.257 04:04:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:34.257 04:04:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:34.257 04:04:47 -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:34.257 04:04:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:34.257 04:04:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:34.257 04:04:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:34.257 04:04:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:34.257 04:04:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:34.257 04:04:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:34.257 04:04:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:34.257 04:04:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:34.257 04:04:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:34.257 04:04:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:34.257 04:04:47 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:34.257 04:04:47 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:34.257 04:04:47 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:34.257 04:04:47 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:34.257 04:04:47 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:34.257 04:04:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:34.257 04:04:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:34.257 04:04:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:34.257 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:34.257 04:04:47 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:34.257 04:04:47 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:34.257 04:04:47 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:34.257 04:04:47 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:34.257 04:04:47 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 
00:13:34.257 04:04:47 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:34.257 04:04:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:34.257 04:04:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:34.257 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:34.257 04:04:47 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:34.258 04:04:47 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:34.258 04:04:47 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:34.258 04:04:47 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:34.258 04:04:47 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:34.258 04:04:47 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:34.258 04:04:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:34.258 04:04:47 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:34.258 04:04:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:34.258 04:04:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.258 04:04:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:34.258 04:04:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.258 04:04:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:34.258 Found net devices under 0000:18:00.0: mlx_0_0 00:13:34.258 04:04:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.258 04:04:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:34.258 04:04:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.258 04:04:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:34.258 04:04:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.258 04:04:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:34.258 Found net devices under 0000:18:00.1: mlx_0_1 00:13:34.258 04:04:47 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:34.258 04:04:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:34.258 04:04:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:34.258 04:04:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:34.258 04:04:47 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:13:34.258 04:04:47 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:13:34.258 04:04:47 -- nvmf/common.sh@409 -- # rdma_device_init 00:13:34.258 04:04:47 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:13:34.258 04:04:47 -- nvmf/common.sh@58 -- # uname 00:13:34.258 04:04:47 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:34.258 04:04:47 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:34.258 04:04:47 -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:34.258 04:04:47 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:34.258 04:04:47 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:34.258 04:04:47 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:34.258 04:04:47 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:34.258 04:04:47 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:34.258 04:04:47 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:13:34.258 04:04:47 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:34.258 04:04:47 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:34.258 04:04:47 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:34.258 04:04:47 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:34.258 04:04:47 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:34.258 04:04:47 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:34.258 04:04:47 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:34.258 04:04:47 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:34.258 04:04:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:34.258 04:04:47 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:34.258 04:04:47 -- nvmf/common.sh@104 -- # echo 
mlx_0_0 00:13:34.258 04:04:47 -- nvmf/common.sh@105 -- # continue 2 00:13:34.258 04:04:47 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:34.258 04:04:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:34.258 04:04:47 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:34.258 04:04:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:34.258 04:04:47 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:34.258 04:04:47 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:34.258 04:04:47 -- nvmf/common.sh@105 -- # continue 2 00:13:34.258 04:04:47 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:34.258 04:04:47 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:34.258 04:04:47 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:34.258 04:04:47 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:34.258 04:04:47 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:34.258 04:04:47 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:34.258 04:04:47 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:34.258 04:04:47 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:34.258 04:04:47 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:34.258 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:34.258 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:13:34.258 altname enp24s0f0np0 00:13:34.258 altname ens785f0np0 00:13:34.258 inet 192.168.100.8/24 scope global mlx_0_0 00:13:34.258 valid_lft forever preferred_lft forever 00:13:34.258 04:04:47 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:34.258 04:04:47 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:34.258 04:04:47 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:34.258 04:04:47 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:34.258 04:04:47 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:34.258 04:04:47 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:34.258 
04:04:47 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:34.258 04:04:47 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:34.258 04:04:47 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:34.258 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:34.258 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:13:34.258 altname enp24s0f1np1 00:13:34.258 altname ens785f1np1 00:13:34.258 inet 192.168.100.9/24 scope global mlx_0_1 00:13:34.258 valid_lft forever preferred_lft forever 00:13:34.258 04:04:47 -- nvmf/common.sh@411 -- # return 0 00:13:34.258 04:04:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:34.258 04:04:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:34.258 04:04:47 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:13:34.258 04:04:47 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:13:34.258 04:04:47 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:34.258 04:04:47 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:34.258 04:04:47 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:34.258 04:04:47 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:34.258 04:04:47 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:34.258 04:04:47 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:34.258 04:04:47 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:34.258 04:04:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:34.258 04:04:47 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:34.258 04:04:47 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:34.258 04:04:47 -- nvmf/common.sh@105 -- # continue 2 00:13:34.258 04:04:47 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:34.258 04:04:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:34.258 04:04:47 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:34.258 04:04:47 -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:34.258 04:04:47 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:34.258 04:04:47 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:34.258 04:04:47 -- nvmf/common.sh@105 -- # continue 2 00:13:34.258 04:04:47 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:34.258 04:04:47 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:34.258 04:04:47 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:34.258 04:04:47 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:34.258 04:04:47 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:34.258 04:04:47 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:34.258 04:04:47 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:34.258 04:04:47 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:34.258 04:04:47 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:34.258 04:04:47 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:34.258 04:04:47 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:34.258 04:04:47 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:34.258 04:04:47 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:13:34.258 192.168.100.9' 00:13:34.258 04:04:47 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:13:34.258 192.168.100.9' 00:13:34.258 04:04:47 -- nvmf/common.sh@446 -- # head -n 1 00:13:34.258 04:04:47 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:34.258 04:04:47 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:13:34.258 192.168.100.9' 00:13:34.258 04:04:47 -- nvmf/common.sh@447 -- # tail -n +2 00:13:34.258 04:04:47 -- nvmf/common.sh@447 -- # head -n 1 00:13:34.258 04:04:47 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:34.258 04:04:47 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:13:34.258 04:04:47 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:34.258 04:04:47 -- nvmf/common.sh@457 -- # '[' rdma 
== tcp ']' 00:13:34.258 04:04:47 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:13:34.258 04:04:47 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:13:34.258 04:04:47 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:34.258 04:04:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:34.258 04:04:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:34.258 04:04:47 -- common/autotest_common.sh@10 -- # set +x 00:13:34.258 04:04:47 -- nvmf/common.sh@470 -- # nvmfpid=274546 00:13:34.258 04:04:47 -- nvmf/common.sh@471 -- # waitforlisten 274546 00:13:34.258 04:04:47 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:34.258 04:04:47 -- common/autotest_common.sh@817 -- # '[' -z 274546 ']' 00:13:34.258 04:04:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.258 04:04:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:34.258 04:04:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.258 04:04:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:34.258 04:04:47 -- common/autotest_common.sh@10 -- # set +x 00:13:34.258 [2024-04-19 04:04:47.809353] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:13:34.258 [2024-04-19 04:04:47.809395] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.258 EAL: No free 2048 kB hugepages reported on node 1 00:13:34.258 [2024-04-19 04:04:47.863459] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:34.258 [2024-04-19 04:04:47.934703] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:34.259 [2024-04-19 04:04:47.934742] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:34.259 [2024-04-19 04:04:47.934749] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:34.259 [2024-04-19 04:04:47.934754] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:34.259 [2024-04-19 04:04:47.934758] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:34.259 [2024-04-19 04:04:47.934794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.259 [2024-04-19 04:04:47.934812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.259 [2024-04-19 04:04:47.934883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:34.259 [2024-04-19 04:04:47.934885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.259 04:04:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:34.259 04:04:48 -- common/autotest_common.sh@850 -- # return 0 00:13:34.259 04:04:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:34.259 04:04:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:34.259 04:04:48 -- common/autotest_common.sh@10 -- # set +x 00:13:34.259 04:04:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:34.259 04:04:48 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:34.259 04:04:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.259 04:04:48 -- common/autotest_common.sh@10 -- # set +x 00:13:34.259 [2024-04-19 04:04:48.643914] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x104b6c0/0x104fbb0) succeed. 00:13:34.259 [2024-04-19 04:04:48.653147] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x104ccb0/0x1091240) succeed. 
00:13:34.259 04:04:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.259 04:04:48 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:34.259 04:04:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.259 04:04:48 -- common/autotest_common.sh@10 -- # set +x 00:13:34.259 Malloc0 00:13:34.259 04:04:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.259 04:04:48 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:34.259 04:04:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.259 04:04:48 -- common/autotest_common.sh@10 -- # set +x 00:13:34.519 04:04:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.519 04:04:48 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:34.519 04:04:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.519 04:04:48 -- common/autotest_common.sh@10 -- # set +x 00:13:34.519 04:04:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.519 04:04:48 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:34.519 04:04:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.519 04:04:48 -- common/autotest_common.sh@10 -- # set +x 00:13:34.519 [2024-04-19 04:04:48.808216] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:34.519 04:04:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.519 04:04:48 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:34.519 test case1: single bdev can't be used in multiple subsystems 00:13:34.519 04:04:48 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:34.519 04:04:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.519 04:04:48 -- 
common/autotest_common.sh@10 -- # set +x 00:13:34.519 04:04:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.519 04:04:48 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:13:34.519 04:04:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.519 04:04:48 -- common/autotest_common.sh@10 -- # set +x 00:13:34.519 04:04:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.519 04:04:48 -- target/nmic.sh@28 -- # nmic_status=0 00:13:34.519 04:04:48 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:34.519 04:04:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.519 04:04:48 -- common/autotest_common.sh@10 -- # set +x 00:13:34.519 [2024-04-19 04:04:48.831988] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:34.519 [2024-04-19 04:04:48.832005] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:34.519 [2024-04-19 04:04:48.832012] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:34.519 request: 00:13:34.519 { 00:13:34.519 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:34.519 "namespace": { 00:13:34.519 "bdev_name": "Malloc0", 00:13:34.519 "no_auto_visible": false 00:13:34.519 }, 00:13:34.519 "method": "nvmf_subsystem_add_ns", 00:13:34.519 "req_id": 1 00:13:34.519 } 00:13:34.519 Got JSON-RPC error response 00:13:34.519 response: 00:13:34.519 { 00:13:34.519 "code": -32602, 00:13:34.519 "message": "Invalid parameters" 00:13:34.519 } 00:13:34.519 04:04:48 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:13:34.519 04:04:48 -- target/nmic.sh@29 -- # nmic_status=1 00:13:34.519 04:04:48 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:34.519 04:04:48 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
00:13:34.519 Adding namespace failed - expected result. 00:13:34.519 04:04:48 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:34.519 test case2: host connect to nvmf target in multiple paths 00:13:34.519 04:04:48 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:13:34.519 04:04:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.519 04:04:48 -- common/autotest_common.sh@10 -- # set +x 00:13:34.519 [2024-04-19 04:04:48.844033] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:13:34.519 04:04:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.520 04:04:48 -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:35.458 04:04:49 -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:13:36.399 04:04:50 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:36.399 04:04:50 -- common/autotest_common.sh@1184 -- # local i=0 00:13:36.399 04:04:50 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:36.399 04:04:50 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:36.399 04:04:50 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:38.303 04:04:52 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:38.303 04:04:52 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:38.303 04:04:52 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:38.303 04:04:52 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:38.303 
04:04:52 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:38.303 04:04:52 -- common/autotest_common.sh@1194 -- # return 0 00:13:38.303 04:04:52 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:38.303 [global] 00:13:38.303 thread=1 00:13:38.303 invalidate=1 00:13:38.303 rw=write 00:13:38.303 time_based=1 00:13:38.303 runtime=1 00:13:38.303 ioengine=libaio 00:13:38.303 direct=1 00:13:38.303 bs=4096 00:13:38.303 iodepth=1 00:13:38.303 norandommap=0 00:13:38.303 numjobs=1 00:13:38.303 00:13:38.303 verify_dump=1 00:13:38.303 verify_backlog=512 00:13:38.303 verify_state_save=0 00:13:38.303 do_verify=1 00:13:38.303 verify=crc32c-intel 00:13:38.303 [job0] 00:13:38.303 filename=/dev/nvme0n1 00:13:38.303 Could not set queue depth (nvme0n1) 00:13:38.893 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:38.893 fio-3.35 00:13:38.893 Starting 1 thread 00:13:39.832 00:13:39.832 job0: (groupid=0, jobs=1): err= 0: pid=275573: Fri Apr 19 04:04:54 2024 00:13:39.832 read: IOPS=7790, BW=30.4MiB/s (31.9MB/s)(30.5MiB/1001msec) 00:13:39.832 slat (nsec): min=5970, max=29281, avg=6749.04, stdev=668.04 00:13:39.832 clat (usec): min=31, max=200, avg=53.89, stdev= 3.83 00:13:39.832 lat (usec): min=51, max=207, avg=60.64, stdev= 3.87 00:13:39.832 clat percentiles (usec): 00:13:39.832 | 1.00th=[ 47], 5.00th=[ 49], 10.00th=[ 50], 20.00th=[ 51], 00:13:39.832 | 30.00th=[ 52], 40.00th=[ 53], 50.00th=[ 55], 60.00th=[ 55], 00:13:39.832 | 70.00th=[ 56], 80.00th=[ 57], 90.00th=[ 59], 95.00th=[ 60], 00:13:39.832 | 99.00th=[ 63], 99.50th=[ 64], 99.90th=[ 69], 99.95th=[ 70], 00:13:39.832 | 99.99th=[ 202] 00:13:39.832 write: IOPS=8183, BW=32.0MiB/s (33.5MB/s)(32.0MiB/1001msec); 0 zone resets 00:13:39.832 slat (nsec): min=8184, max=51159, avg=8982.40, stdev=1143.26 00:13:39.832 clat (nsec): min=35226, max=89376, avg=51467.19, 
stdev=3620.25 00:13:39.832 lat (usec): min=51, max=140, avg=60.45, stdev= 3.84 00:13:39.832 clat percentiles (nsec): 00:13:39.832 | 1.00th=[44800], 5.00th=[45824], 10.00th=[46848], 20.00th=[48384], 00:13:39.833 | 30.00th=[49408], 40.00th=[50432], 50.00th=[51456], 60.00th=[52480], 00:13:39.833 | 70.00th=[53504], 80.00th=[54528], 90.00th=[56064], 95.00th=[57600], 00:13:39.833 | 99.00th=[60160], 99.50th=[61696], 99.90th=[66048], 99.95th=[68096], 00:13:39.833 | 99.99th=[89600] 00:13:39.833 bw ( KiB/s): min=32768, max=32768, per=100.00%, avg=32768.00, stdev= 0.00, samples=1 00:13:39.833 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=1 00:13:39.833 lat (usec) : 50=25.68%, 100=74.31%, 250=0.01% 00:13:39.833 cpu : usr=8.50%, sys=11.80%, ctx=15990, majf=0, minf=2 00:13:39.833 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:39.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.833 issued rwts: total=7798,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.833 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:39.833 00:13:39.833 Run status group 0 (all jobs): 00:13:39.833 READ: bw=30.4MiB/s (31.9MB/s), 30.4MiB/s-30.4MiB/s (31.9MB/s-31.9MB/s), io=30.5MiB (31.9MB), run=1001-1001msec 00:13:39.833 WRITE: bw=32.0MiB/s (33.5MB/s), 32.0MiB/s-32.0MiB/s (33.5MB/s-33.5MB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:13:39.833 00:13:39.833 Disk stats (read/write): 00:13:39.833 nvme0n1: ios=7218/7192, merge=0/0, ticks=353/328, in_queue=681, util=90.58% 00:13:39.833 04:04:54 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:41.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:41.740 04:04:56 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:41.740 04:04:56 -- common/autotest_common.sh@1205 -- # local i=0 00:13:41.740 04:04:56 -- 
common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:41.740 04:04:56 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:42.001 04:04:56 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:42.001 04:04:56 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:42.001 04:04:56 -- common/autotest_common.sh@1217 -- # return 0 00:13:42.001 04:04:56 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:42.001 04:04:56 -- target/nmic.sh@53 -- # nvmftestfini 00:13:42.001 04:04:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:42.001 04:04:56 -- nvmf/common.sh@117 -- # sync 00:13:42.001 04:04:56 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:42.001 04:04:56 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:42.001 04:04:56 -- nvmf/common.sh@120 -- # set +e 00:13:42.001 04:04:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:42.001 04:04:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:42.001 rmmod nvme_rdma 00:13:42.001 rmmod nvme_fabrics 00:13:42.001 04:04:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:42.001 04:04:56 -- nvmf/common.sh@124 -- # set -e 00:13:42.001 04:04:56 -- nvmf/common.sh@125 -- # return 0 00:13:42.001 04:04:56 -- nvmf/common.sh@478 -- # '[' -n 274546 ']' 00:13:42.001 04:04:56 -- nvmf/common.sh@479 -- # killprocess 274546 00:13:42.001 04:04:56 -- common/autotest_common.sh@936 -- # '[' -z 274546 ']' 00:13:42.001 04:04:56 -- common/autotest_common.sh@940 -- # kill -0 274546 00:13:42.001 04:04:56 -- common/autotest_common.sh@941 -- # uname 00:13:42.001 04:04:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:42.001 04:04:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 274546 00:13:42.001 04:04:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:42.001 04:04:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:42.001 04:04:56 -- common/autotest_common.sh@954 -- # echo 
'killing process with pid 274546' 00:13:42.001 killing process with pid 274546 00:13:42.001 04:04:56 -- common/autotest_common.sh@955 -- # kill 274546 00:13:42.001 04:04:56 -- common/autotest_common.sh@960 -- # wait 274546 00:13:42.262 04:04:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:42.262 04:04:56 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:13:42.262 00:13:42.262 real 0m14.708s 00:13:42.262 user 0m44.126s 00:13:42.262 sys 0m5.178s 00:13:42.262 04:04:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:42.262 04:04:56 -- common/autotest_common.sh@10 -- # set +x 00:13:42.262 ************************************ 00:13:42.262 END TEST nvmf_nmic 00:13:42.262 ************************************ 00:13:42.262 04:04:56 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:13:42.262 04:04:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:42.262 04:04:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:42.262 04:04:56 -- common/autotest_common.sh@10 -- # set +x 00:13:42.522 ************************************ 00:13:42.522 START TEST nvmf_fio_target 00:13:42.522 ************************************ 00:13:42.522 04:04:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:13:42.522 * Looking for test storage... 
00:13:42.522 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:42.522 04:04:56 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:42.522 04:04:56 -- nvmf/common.sh@7 -- # uname -s 00:13:42.522 04:04:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.523 04:04:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.523 04:04:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.523 04:04:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.523 04:04:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.523 04:04:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.523 04:04:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.523 04:04:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.523 04:04:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.523 04:04:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.523 04:04:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:42.523 04:04:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:13:42.523 04:04:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.523 04:04:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.523 04:04:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:42.523 04:04:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.523 04:04:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:42.523 04:04:56 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.523 04:04:56 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.523 04:04:56 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.523 04:04:56 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.523 04:04:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.523 04:04:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.523 04:04:56 -- paths/export.sh@5 -- # export PATH 00:13:42.523 04:04:56 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.523 04:04:56 -- nvmf/common.sh@47 -- # : 0 00:13:42.523 04:04:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:42.523 04:04:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:42.523 04:04:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.523 04:04:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.523 04:04:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.523 04:04:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:42.523 04:04:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:42.523 04:04:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:42.523 04:04:56 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:42.523 04:04:56 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:42.523 04:04:56 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:42.523 04:04:56 -- target/fio.sh@16 -- # nvmftestinit 00:13:42.523 04:04:56 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:13:42.523 04:04:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.523 04:04:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:42.523 04:04:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:42.523 04:04:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:42.523 04:04:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.523 04:04:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:13:42.523 04:04:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.523 04:04:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:42.523 04:04:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:42.523 04:04:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:42.523 04:04:56 -- common/autotest_common.sh@10 -- # set +x 00:13:47.809 04:05:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:47.809 04:05:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:47.809 04:05:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:47.809 04:05:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:47.809 04:05:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:47.809 04:05:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:47.809 04:05:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:47.809 04:05:01 -- nvmf/common.sh@295 -- # net_devs=() 00:13:47.809 04:05:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:47.809 04:05:01 -- nvmf/common.sh@296 -- # e810=() 00:13:47.809 04:05:01 -- nvmf/common.sh@296 -- # local -ga e810 00:13:47.809 04:05:01 -- nvmf/common.sh@297 -- # x722=() 00:13:47.809 04:05:01 -- nvmf/common.sh@297 -- # local -ga x722 00:13:47.809 04:05:01 -- nvmf/common.sh@298 -- # mlx=() 00:13:47.809 04:05:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:47.809 04:05:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:47.809 04:05:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:47.809 04:05:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:47.809 04:05:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:47.809 04:05:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:47.809 04:05:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:47.809 04:05:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:47.809 04:05:01 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:47.809 04:05:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:47.809 04:05:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:47.809 04:05:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:47.809 04:05:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:47.809 04:05:01 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:47.809 04:05:01 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:47.809 04:05:01 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:47.809 04:05:01 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:47.809 04:05:01 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:47.809 04:05:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:47.809 04:05:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.809 04:05:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:47.809 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:47.809 04:05:01 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:47.809 04:05:01 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:47.809 04:05:01 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:47.809 04:05:01 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:47.809 04:05:01 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:47.809 04:05:01 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:47.809 04:05:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.809 04:05:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:47.809 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:47.809 04:05:01 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:47.809 04:05:01 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:47.809 04:05:01 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:47.809 04:05:01 -- 
nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:47.809 04:05:01 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:47.809 04:05:01 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:47.809 04:05:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:47.809 04:05:01 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:47.809 04:05:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.809 04:05:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.809 04:05:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:47.809 04:05:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.809 04:05:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:47.810 Found net devices under 0000:18:00.0: mlx_0_0 00:13:47.810 04:05:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.810 04:05:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.810 04:05:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.810 04:05:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:47.810 04:05:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.810 04:05:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:47.810 Found net devices under 0000:18:00.1: mlx_0_1 00:13:47.810 04:05:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.810 04:05:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:47.810 04:05:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:47.810 04:05:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:47.810 04:05:01 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:13:47.810 04:05:01 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:13:47.810 04:05:01 -- nvmf/common.sh@409 -- # rdma_device_init 00:13:47.810 04:05:01 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:13:47.810 04:05:01 -- nvmf/common.sh@58 -- # uname 00:13:47.810 
04:05:01 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:47.810 04:05:01 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:47.810 04:05:01 -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:47.810 04:05:01 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:47.810 04:05:01 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:47.810 04:05:01 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:47.810 04:05:01 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:47.810 04:05:01 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:47.810 04:05:01 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:13:47.810 04:05:01 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:47.810 04:05:01 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:47.810 04:05:01 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:47.810 04:05:01 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:47.810 04:05:01 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:47.810 04:05:01 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:47.810 04:05:01 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:47.810 04:05:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:47.810 04:05:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:47.810 04:05:01 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:47.810 04:05:01 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:47.810 04:05:01 -- nvmf/common.sh@105 -- # continue 2 00:13:47.810 04:05:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:47.810 04:05:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:47.810 04:05:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:47.810 04:05:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:47.810 04:05:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:47.810 04:05:01 -- nvmf/common.sh@104 -- # echo mlx_0_1 
00:13:47.810 04:05:01 -- nvmf/common.sh@105 -- # continue 2 00:13:47.810 04:05:01 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:47.810 04:05:01 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:47.810 04:05:01 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:47.810 04:05:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:47.810 04:05:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:47.810 04:05:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:47.810 04:05:01 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:47.810 04:05:01 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:47.810 04:05:01 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:47.810 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:47.810 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:13:47.810 altname enp24s0f0np0 00:13:47.810 altname ens785f0np0 00:13:47.810 inet 192.168.100.8/24 scope global mlx_0_0 00:13:47.810 valid_lft forever preferred_lft forever 00:13:47.810 04:05:01 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:47.810 04:05:01 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:47.810 04:05:01 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:47.810 04:05:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:47.810 04:05:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:47.810 04:05:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:47.810 04:05:01 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:47.810 04:05:01 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:47.810 04:05:01 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:47.810 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:47.810 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:13:47.810 altname enp24s0f1np1 00:13:47.810 altname ens785f1np1 00:13:47.810 inet 192.168.100.9/24 scope global mlx_0_1 00:13:47.810 valid_lft forever preferred_lft forever 00:13:47.810 04:05:01 -- nvmf/common.sh@411 
-- # return 0 00:13:47.810 04:05:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:47.810 04:05:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:47.810 04:05:01 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:13:47.810 04:05:01 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:13:47.810 04:05:01 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:47.810 04:05:01 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:47.810 04:05:01 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:47.810 04:05:01 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:47.810 04:05:01 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:47.810 04:05:01 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:47.810 04:05:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:47.810 04:05:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:47.810 04:05:01 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:47.810 04:05:01 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:47.810 04:05:01 -- nvmf/common.sh@105 -- # continue 2 00:13:47.810 04:05:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:47.810 04:05:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:47.810 04:05:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:47.810 04:05:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:47.810 04:05:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:47.810 04:05:01 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:47.810 04:05:01 -- nvmf/common.sh@105 -- # continue 2 00:13:47.810 04:05:01 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:47.810 04:05:01 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:47.810 04:05:01 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:47.810 04:05:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 
00:13:47.810 04:05:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:47.810 04:05:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:47.810 04:05:01 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:47.810 04:05:01 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:47.810 04:05:01 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:47.810 04:05:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:47.810 04:05:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:47.810 04:05:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:47.810 04:05:02 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:13:47.810 192.168.100.9' 00:13:47.810 04:05:02 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:13:47.810 192.168.100.9' 00:13:47.810 04:05:02 -- nvmf/common.sh@446 -- # head -n 1 00:13:47.810 04:05:02 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:47.810 04:05:02 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:13:47.810 192.168.100.9' 00:13:47.810 04:05:02 -- nvmf/common.sh@447 -- # head -n 1 00:13:47.810 04:05:02 -- nvmf/common.sh@447 -- # tail -n +2 00:13:47.810 04:05:02 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:47.810 04:05:02 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:13:47.810 04:05:02 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:47.810 04:05:02 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:13:47.810 04:05:02 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:13:47.810 04:05:02 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:13:47.810 04:05:02 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:47.810 04:05:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:47.810 04:05:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:47.810 04:05:02 -- common/autotest_common.sh@10 -- # set +x 00:13:47.810 04:05:02 -- nvmf/common.sh@470 -- # nvmfpid=279367 00:13:47.810 04:05:02 -- nvmf/common.sh@471 -- # waitforlisten 
279367 00:13:47.810 04:05:02 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:47.810 04:05:02 -- common/autotest_common.sh@817 -- # '[' -z 279367 ']' 00:13:47.810 04:05:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.810 04:05:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:47.810 04:05:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.810 04:05:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:47.810 04:05:02 -- common/autotest_common.sh@10 -- # set +x 00:13:47.811 [2024-04-19 04:05:02.093602] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:13:47.811 [2024-04-19 04:05:02.093653] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.811 EAL: No free 2048 kB hugepages reported on node 1 00:13:47.811 [2024-04-19 04:05:02.146330] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:47.811 [2024-04-19 04:05:02.215488] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.811 [2024-04-19 04:05:02.215527] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.811 [2024-04-19 04:05:02.215532] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.811 [2024-04-19 04:05:02.215537] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:13:47.811 [2024-04-19 04:05:02.215542] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:47.811 [2024-04-19 04:05:02.215602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.811 [2024-04-19 04:05:02.215694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.811 [2024-04-19 04:05:02.215779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:47.811 [2024-04-19 04:05:02.215780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.381 04:05:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:48.381 04:05:02 -- common/autotest_common.sh@850 -- # return 0 00:13:48.381 04:05:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:48.381 04:05:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:48.381 04:05:02 -- common/autotest_common.sh@10 -- # set +x 00:13:48.381 04:05:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.381 04:05:02 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:48.640 [2024-04-19 04:05:03.066807] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e836c0/0x1e87bb0) succeed. 00:13:48.641 [2024-04-19 04:05:03.075912] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e84cb0/0x1ec9240) succeed. 
00:13:48.900 04:05:03 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:48.900 04:05:03 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:48.900 04:05:03 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:49.160 04:05:03 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:49.160 04:05:03 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:49.420 04:05:03 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:49.420 04:05:03 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:49.420 04:05:03 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:49.420 04:05:03 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:49.679 04:05:04 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:49.938 04:05:04 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:49.938 04:05:04 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:50.197 04:05:04 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:50.197 04:05:04 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:50.198 04:05:04 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:50.198 04:05:04 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:50.457 04:05:04 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
00:13:50.457 04:05:04 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:50.457 04:05:04 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:50.716 04:05:05 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:50.716 04:05:05 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:50.974 04:05:05 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:50.974 [2024-04-19 04:05:05.447098] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:50.974 04:05:05 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:51.234 04:05:05 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:51.493 04:05:05 -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:52.427 04:05:06 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:52.427 04:05:06 -- common/autotest_common.sh@1184 -- # local i=0 00:13:52.428 04:05:06 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:52.428 04:05:06 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:13:52.428 04:05:06 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:13:52.428 04:05:06 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:54.332 04:05:08 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:54.332 04:05:08 -- 
common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:54.332 04:05:08 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:54.332 04:05:08 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:13:54.332 04:05:08 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:54.332 04:05:08 -- common/autotest_common.sh@1194 -- # return 0 00:13:54.332 04:05:08 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:54.332 [global] 00:13:54.332 thread=1 00:13:54.332 invalidate=1 00:13:54.332 rw=write 00:13:54.332 time_based=1 00:13:54.332 runtime=1 00:13:54.332 ioengine=libaio 00:13:54.332 direct=1 00:13:54.332 bs=4096 00:13:54.332 iodepth=1 00:13:54.332 norandommap=0 00:13:54.332 numjobs=1 00:13:54.332 00:13:54.332 verify_dump=1 00:13:54.332 verify_backlog=512 00:13:54.332 verify_state_save=0 00:13:54.332 do_verify=1 00:13:54.332 verify=crc32c-intel 00:13:54.332 [job0] 00:13:54.332 filename=/dev/nvme0n1 00:13:54.332 [job1] 00:13:54.332 filename=/dev/nvme0n2 00:13:54.332 [job2] 00:13:54.332 filename=/dev/nvme0n3 00:13:54.332 [job3] 00:13:54.332 filename=/dev/nvme0n4 00:13:54.332 Could not set queue depth (nvme0n1) 00:13:54.332 Could not set queue depth (nvme0n2) 00:13:54.332 Could not set queue depth (nvme0n3) 00:13:54.332 Could not set queue depth (nvme0n4) 00:13:54.898 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:54.898 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:54.898 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:54.898 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:54.898 fio-3.35 00:13:54.898 Starting 4 threads 00:13:55.836 00:13:55.836 job0: (groupid=0, jobs=1): err= 0: 
pid=280888: Fri Apr 19 04:05:10 2024 00:13:55.836 read: IOPS=3842, BW=15.0MiB/s (15.7MB/s)(15.0MiB/1001msec) 00:13:55.836 slat (nsec): min=6160, max=29698, avg=7223.68, stdev=1016.15 00:13:55.836 clat (usec): min=62, max=395, avg=119.50, stdev=26.33 00:13:55.836 lat (usec): min=68, max=402, avg=126.73, stdev=26.54 00:13:55.836 clat percentiles (usec): 00:13:55.836 | 1.00th=[ 69], 5.00th=[ 73], 10.00th=[ 77], 20.00th=[ 84], 00:13:55.836 | 30.00th=[ 124], 40.00th=[ 127], 50.00th=[ 129], 60.00th=[ 131], 00:13:55.836 | 70.00th=[ 133], 80.00th=[ 135], 90.00th=[ 145], 95.00th=[ 151], 00:13:55.836 | 99.00th=[ 172], 99.50th=[ 178], 99.90th=[ 212], 99.95th=[ 343], 00:13:55.836 | 99.99th=[ 396] 00:13:55.836 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:13:55.836 slat (nsec): min=5234, max=41489, avg=9219.12, stdev=1226.15 00:13:55.836 clat (usec): min=59, max=336, avg=111.88, stdev=25.82 00:13:55.836 lat (usec): min=67, max=346, avg=121.10, stdev=25.98 00:13:55.836 clat percentiles (usec): 00:13:55.836 | 1.00th=[ 64], 5.00th=[ 70], 10.00th=[ 73], 20.00th=[ 78], 00:13:55.836 | 30.00th=[ 113], 40.00th=[ 119], 50.00th=[ 121], 60.00th=[ 123], 00:13:55.836 | 70.00th=[ 125], 80.00th=[ 128], 90.00th=[ 139], 95.00th=[ 145], 00:13:55.836 | 99.00th=[ 159], 99.50th=[ 172], 99.90th=[ 241], 99.95th=[ 247], 00:13:55.836 | 99.99th=[ 338] 00:13:55.836 bw ( KiB/s): min=16384, max=16384, per=26.51%, avg=16384.00, stdev= 0.00, samples=1 00:13:55.836 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:13:55.836 lat (usec) : 100=26.49%, 250=73.46%, 500=0.05% 00:13:55.836 cpu : usr=3.60%, sys=7.10%, ctx=7942, majf=0, minf=1 00:13:55.836 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:55.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.836 issued rwts: total=3846,4096,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:13:55.836 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:55.836 job1: (groupid=0, jobs=1): err= 0: pid=280889: Fri Apr 19 04:05:10 2024 00:13:55.836 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:13:55.836 slat (nsec): min=6192, max=17985, avg=7389.54, stdev=864.33 00:13:55.836 clat (usec): min=61, max=366, avg=128.47, stdev=16.81 00:13:55.836 lat (usec): min=68, max=373, avg=135.86, stdev=16.94 00:13:55.836 clat percentiles (usec): 00:13:55.836 | 1.00th=[ 79], 5.00th=[ 94], 10.00th=[ 110], 20.00th=[ 124], 00:13:55.836 | 30.00th=[ 127], 40.00th=[ 129], 50.00th=[ 130], 60.00th=[ 133], 00:13:55.836 | 70.00th=[ 135], 80.00th=[ 137], 90.00th=[ 147], 95.00th=[ 151], 00:13:55.836 | 99.00th=[ 165], 99.50th=[ 172], 99.90th=[ 223], 99.95th=[ 334], 00:13:55.836 | 99.99th=[ 367] 00:13:55.836 write: IOPS=3841, BW=15.0MiB/s (15.7MB/s)(15.0MiB/1001msec); 0 zone resets 00:13:55.836 slat (nsec): min=6885, max=52962, avg=9448.83, stdev=1242.80 00:13:55.836 clat (usec): min=60, max=252, avg=119.94, stdev=16.80 00:13:55.836 lat (usec): min=69, max=262, avg=129.39, stdev=16.89 00:13:55.836 clat percentiles (usec): 00:13:55.836 | 1.00th=[ 73], 5.00th=[ 82], 10.00th=[ 98], 20.00th=[ 114], 00:13:55.836 | 30.00th=[ 119], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 124], 00:13:55.836 | 70.00th=[ 127], 80.00th=[ 131], 90.00th=[ 139], 95.00th=[ 143], 00:13:55.836 | 99.00th=[ 155], 99.50th=[ 161], 99.90th=[ 184], 99.95th=[ 215], 00:13:55.836 | 99.99th=[ 253] 00:13:55.836 bw ( KiB/s): min=16384, max=16384, per=26.51%, avg=16384.00, stdev= 0.00, samples=1 00:13:55.836 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:13:55.836 lat (usec) : 100=9.02%, 250=90.93%, 500=0.05% 00:13:55.836 cpu : usr=3.60%, sys=6.40%, ctx=7430, majf=0, minf=1 00:13:55.836 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:55.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.836 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.836 issued rwts: total=3584,3845,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.836 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:55.836 job2: (groupid=0, jobs=1): err= 0: pid=280891: Fri Apr 19 04:05:10 2024 00:13:55.836 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:13:55.836 slat (nsec): min=6278, max=22592, avg=7501.45, stdev=836.58 00:13:55.836 clat (usec): min=70, max=351, avg=131.04, stdev=15.12 00:13:55.836 lat (usec): min=77, max=360, avg=138.54, stdev=15.22 00:13:55.836 clat percentiles (usec): 00:13:55.836 | 1.00th=[ 83], 5.00th=[ 97], 10.00th=[ 124], 20.00th=[ 127], 00:13:55.836 | 30.00th=[ 128], 40.00th=[ 130], 50.00th=[ 131], 60.00th=[ 133], 00:13:55.836 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 147], 95.00th=[ 153], 00:13:55.836 | 99.00th=[ 169], 99.50th=[ 178], 99.90th=[ 265], 99.95th=[ 306], 00:13:55.836 | 99.99th=[ 351] 00:13:55.836 write: IOPS=3681, BW=14.4MiB/s (15.1MB/s)(14.4MiB/1001msec); 0 zone resets 00:13:55.836 slat (nsec): min=7164, max=49147, avg=9849.62, stdev=1365.97 00:13:55.836 clat (usec): min=70, max=285, avg=122.49, stdev=15.43 00:13:55.836 lat (usec): min=80, max=298, avg=132.34, stdev=15.59 00:13:55.836 clat percentiles (usec): 00:13:55.836 | 1.00th=[ 76], 5.00th=[ 86], 10.00th=[ 113], 20.00th=[ 118], 00:13:55.836 | 30.00th=[ 120], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 124], 00:13:55.836 | 70.00th=[ 127], 80.00th=[ 131], 90.00th=[ 139], 95.00th=[ 145], 00:13:55.836 | 99.00th=[ 165], 99.50th=[ 174], 99.90th=[ 200], 99.95th=[ 253], 00:13:55.836 | 99.99th=[ 285] 00:13:55.836 bw ( KiB/s): min=16384, max=16384, per=26.51%, avg=16384.00, stdev= 0.00, samples=1 00:13:55.836 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:13:55.836 lat (usec) : 100=6.08%, 250=93.84%, 500=0.08% 00:13:55.836 cpu : usr=3.70%, sys=6.50%, ctx=7269, majf=0, minf=1 00:13:55.836 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:13:55.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.836 issued rwts: total=3584,3685,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.836 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:55.836 job3: (groupid=0, jobs=1): err= 0: pid=280892: Fri Apr 19 04:05:10 2024 00:13:55.836 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:13:55.836 slat (nsec): min=3247, max=23098, avg=7427.19, stdev=1172.59 00:13:55.836 clat (usec): min=68, max=365, avg=128.56, stdev=16.54 00:13:55.836 lat (usec): min=71, max=373, avg=135.98, stdev=16.83 00:13:55.836 clat percentiles (usec): 00:13:55.836 | 1.00th=[ 82], 5.00th=[ 92], 10.00th=[ 111], 20.00th=[ 124], 00:13:55.836 | 30.00th=[ 127], 40.00th=[ 129], 50.00th=[ 130], 60.00th=[ 133], 00:13:55.836 | 70.00th=[ 135], 80.00th=[ 137], 90.00th=[ 145], 95.00th=[ 151], 00:13:55.836 | 99.00th=[ 167], 99.50th=[ 178], 99.90th=[ 219], 99.95th=[ 322], 00:13:55.836 | 99.99th=[ 367] 00:13:55.836 write: IOPS=3834, BW=15.0MiB/s (15.7MB/s)(15.0MiB/1001msec); 0 zone resets 00:13:55.836 slat (nsec): min=6214, max=41645, avg=9962.55, stdev=1463.79 00:13:55.836 clat (usec): min=64, max=205, avg=119.53, stdev=15.41 00:13:55.836 lat (usec): min=73, max=216, avg=129.49, stdev=15.51 00:13:55.836 clat percentiles (usec): 00:13:55.836 | 1.00th=[ 76], 5.00th=[ 88], 10.00th=[ 98], 20.00th=[ 113], 00:13:55.836 | 30.00th=[ 118], 40.00th=[ 121], 50.00th=[ 122], 60.00th=[ 124], 00:13:55.836 | 70.00th=[ 126], 80.00th=[ 129], 90.00th=[ 137], 95.00th=[ 143], 00:13:55.836 | 99.00th=[ 155], 99.50th=[ 163], 99.90th=[ 180], 99.95th=[ 194], 00:13:55.836 | 99.99th=[ 206] 00:13:55.836 bw ( KiB/s): min=16384, max=16384, per=26.51%, avg=16384.00, stdev= 0.00, samples=1 00:13:55.836 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:13:55.836 lat (usec) : 100=9.16%, 250=90.80%, 500=0.04% 
00:13:55.836 cpu : usr=4.10%, sys=6.20%, ctx=7422, majf=0, minf=2 00:13:55.836 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:55.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.837 issued rwts: total=3584,3838,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.837 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:55.837 00:13:55.837 Run status group 0 (all jobs): 00:13:55.837 READ: bw=57.0MiB/s (59.7MB/s), 14.0MiB/s-15.0MiB/s (14.7MB/s-15.7MB/s), io=57.0MiB (59.8MB), run=1001-1001msec 00:13:55.837 WRITE: bw=60.3MiB/s (63.3MB/s), 14.4MiB/s-16.0MiB/s (15.1MB/s-16.8MB/s), io=60.4MiB (63.3MB), run=1001-1001msec 00:13:55.837 00:13:55.837 Disk stats (read/write): 00:13:55.837 nvme0n1: ios=3122/3243, merge=0/0, ticks=409/388, in_queue=797, util=87.37% 00:13:55.837 nvme0n2: ios=3072/3391, merge=0/0, ticks=367/389, in_queue=756, util=87.44% 00:13:55.837 nvme0n3: ios=3072/3241, merge=0/0, ticks=391/376, in_queue=767, util=89.25% 00:13:55.837 nvme0n4: ios=3072/3391, merge=0/0, ticks=377/384, in_queue=761, util=89.91% 00:13:55.837 04:05:10 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:55.837 [global] 00:13:55.837 thread=1 00:13:55.837 invalidate=1 00:13:55.837 rw=randwrite 00:13:55.837 time_based=1 00:13:55.837 runtime=1 00:13:55.837 ioengine=libaio 00:13:55.837 direct=1 00:13:55.837 bs=4096 00:13:55.837 iodepth=1 00:13:55.837 norandommap=0 00:13:55.837 numjobs=1 00:13:55.837 00:13:55.837 verify_dump=1 00:13:55.837 verify_backlog=512 00:13:55.837 verify_state_save=0 00:13:55.837 do_verify=1 00:13:55.837 verify=crc32c-intel 00:13:55.837 [job0] 00:13:55.837 filename=/dev/nvme0n1 00:13:55.837 [job1] 00:13:55.837 filename=/dev/nvme0n2 00:13:55.837 [job2] 00:13:55.837 filename=/dev/nvme0n3 00:13:55.837 [job3] 00:13:55.837 
filename=/dev/nvme0n4 00:13:56.113 Could not set queue depth (nvme0n1) 00:13:56.113 Could not set queue depth (nvme0n2) 00:13:56.113 Could not set queue depth (nvme0n3) 00:13:56.113 Could not set queue depth (nvme0n4) 00:13:56.375 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:56.375 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:56.375 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:56.375 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:56.375 fio-3.35 00:13:56.375 Starting 4 threads 00:13:57.750 00:13:57.750 job0: (groupid=0, jobs=1): err= 0: pid=281324: Fri Apr 19 04:05:11 2024 00:13:57.750 read: IOPS=3367, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1001msec) 00:13:57.750 slat (nsec): min=6107, max=27352, avg=7497.56, stdev=1078.09 00:13:57.750 clat (usec): min=63, max=390, avg=136.93, stdev=19.70 00:13:57.750 lat (usec): min=70, max=397, avg=144.42, stdev=19.72 00:13:57.750 clat percentiles (usec): 00:13:57.750 | 1.00th=[ 88], 5.00th=[ 124], 10.00th=[ 127], 20.00th=[ 129], 00:13:57.750 | 30.00th=[ 131], 40.00th=[ 133], 50.00th=[ 133], 60.00th=[ 135], 00:13:57.750 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 159], 95.00th=[ 167], 00:13:57.750 | 99.00th=[ 202], 99.50th=[ 247], 99.90th=[ 343], 99.95th=[ 371], 00:13:57.750 | 99.99th=[ 392] 00:13:57.750 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:13:57.750 slat (nsec): min=7919, max=80011, avg=9256.72, stdev=1654.52 00:13:57.750 clat (usec): min=61, max=669, avg=129.95, stdev=22.83 00:13:57.750 lat (usec): min=70, max=678, avg=139.21, stdev=22.90 00:13:57.750 clat percentiles (usec): 00:13:57.750 | 1.00th=[ 75], 5.00th=[ 116], 10.00th=[ 119], 20.00th=[ 122], 00:13:57.750 | 30.00th=[ 123], 40.00th=[ 125], 50.00th=[ 126], 60.00th=[ 
128], 00:13:57.750 | 70.00th=[ 131], 80.00th=[ 139], 90.00th=[ 151], 95.00th=[ 157], 00:13:57.750 | 99.00th=[ 208], 99.50th=[ 258], 99.90th=[ 396], 99.95th=[ 404], 00:13:57.750 | 99.99th=[ 668] 00:13:57.750 bw ( KiB/s): min=14912, max=14912, per=25.13%, avg=14912.00, stdev= 0.00, samples=1 00:13:57.750 iops : min= 3728, max= 3728, avg=3728.00, stdev= 0.00, samples=1 00:13:57.750 lat (usec) : 100=2.20%, 250=97.28%, 500=0.50%, 750=0.01% 00:13:57.750 cpu : usr=2.60%, sys=6.80%, ctx=6956, majf=0, minf=1 00:13:57.750 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:57.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.750 issued rwts: total=3371,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:57.750 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:57.750 job1: (groupid=0, jobs=1): err= 0: pid=281333: Fri Apr 19 04:05:11 2024 00:13:57.750 read: IOPS=3884, BW=15.2MiB/s (15.9MB/s)(15.2MiB/1001msec) 00:13:57.750 slat (nsec): min=5953, max=19149, avg=7291.52, stdev=899.34 00:13:57.750 clat (usec): min=50, max=564, avg=118.95, stdev=31.34 00:13:57.750 lat (usec): min=56, max=571, avg=126.24, stdev=31.63 00:13:57.750 clat percentiles (usec): 00:13:57.750 | 1.00th=[ 63], 5.00th=[ 68], 10.00th=[ 71], 20.00th=[ 77], 00:13:57.750 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 131], 60.00th=[ 133], 00:13:57.750 | 70.00th=[ 135], 80.00th=[ 137], 90.00th=[ 139], 95.00th=[ 143], 00:13:57.750 | 99.00th=[ 169], 99.50th=[ 200], 99.90th=[ 388], 99.95th=[ 537], 00:13:57.750 | 99.99th=[ 562] 00:13:57.750 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:13:57.750 slat (nsec): min=7724, max=35881, avg=9073.60, stdev=1149.63 00:13:57.750 clat (usec): min=47, max=396, avg=111.22, stdev=29.82 00:13:57.750 lat (usec): min=56, max=405, avg=120.29, stdev=30.02 00:13:57.750 clat percentiles (usec): 00:13:57.750 | 
1.00th=[ 60], 5.00th=[ 64], 10.00th=[ 67], 20.00th=[ 72], 00:13:57.750 | 30.00th=[ 117], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 125], 00:13:57.750 | 70.00th=[ 126], 80.00th=[ 128], 90.00th=[ 133], 95.00th=[ 141], 00:13:57.750 | 99.00th=[ 169], 99.50th=[ 219], 99.90th=[ 281], 99.95th=[ 326], 00:13:57.750 | 99.99th=[ 396] 00:13:57.750 bw ( KiB/s): min=18960, max=18960, per=31.96%, avg=18960.00, stdev= 0.00, samples=1 00:13:57.750 iops : min= 4740, max= 4740, avg=4740.00, stdev= 0.00, samples=1 00:13:57.750 lat (usec) : 50=0.06%, 100=25.78%, 250=73.89%, 500=0.25%, 750=0.03% 00:13:57.750 cpu : usr=2.60%, sys=8.10%, ctx=7984, majf=0, minf=1 00:13:57.750 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:57.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.750 issued rwts: total=3888,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:57.750 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:57.750 job2: (groupid=0, jobs=1): err= 0: pid=281346: Fri Apr 19 04:05:11 2024 00:13:57.750 read: IOPS=3354, BW=13.1MiB/s (13.7MB/s)(13.1MiB/1001msec) 00:13:57.750 slat (nsec): min=6331, max=25563, avg=7621.42, stdev=973.42 00:13:57.750 clat (usec): min=70, max=646, avg=137.04, stdev=21.32 00:13:57.750 lat (usec): min=77, max=653, avg=144.66, stdev=21.36 00:13:57.750 clat percentiles (usec): 00:13:57.750 | 1.00th=[ 93], 5.00th=[ 124], 10.00th=[ 127], 20.00th=[ 129], 00:13:57.750 | 30.00th=[ 131], 40.00th=[ 133], 50.00th=[ 133], 60.00th=[ 135], 00:13:57.750 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 159], 95.00th=[ 167], 00:13:57.750 | 99.00th=[ 206], 99.50th=[ 251], 99.90th=[ 367], 99.95th=[ 445], 00:13:57.750 | 99.99th=[ 644] 00:13:57.750 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:13:57.750 slat (nsec): min=8252, max=44381, avg=9558.77, stdev=1157.05 00:13:57.750 clat (usec): min=66, 
max=403, avg=129.79, stdev=21.14 00:13:57.750 lat (usec): min=75, max=413, avg=139.35, stdev=21.23 00:13:57.750 clat percentiles (usec): 00:13:57.750 | 1.00th=[ 77], 5.00th=[ 116], 10.00th=[ 119], 20.00th=[ 121], 00:13:57.750 | 30.00th=[ 123], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 127], 00:13:57.750 | 70.00th=[ 130], 80.00th=[ 139], 90.00th=[ 151], 95.00th=[ 159], 00:13:57.750 | 99.00th=[ 208], 99.50th=[ 253], 99.90th=[ 306], 99.95th=[ 400], 00:13:57.750 | 99.99th=[ 404] 00:13:57.750 bw ( KiB/s): min=14784, max=14784, per=24.92%, avg=14784.00, stdev= 0.00, samples=1 00:13:57.750 iops : min= 3696, max= 3696, avg=3696.00, stdev= 0.00, samples=1 00:13:57.750 lat (usec) : 100=1.90%, 250=97.58%, 500=0.50%, 750=0.01% 00:13:57.750 cpu : usr=4.00%, sys=5.70%, ctx=6942, majf=0, minf=1 00:13:57.750 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:57.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.750 issued rwts: total=3358,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:57.750 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:57.750 job3: (groupid=0, jobs=1): err= 0: pid=281351: Fri Apr 19 04:05:11 2024 00:13:57.750 read: IOPS=3357, BW=13.1MiB/s (13.8MB/s)(13.1MiB/1001msec) 00:13:57.750 slat (nsec): min=6112, max=28583, avg=7654.39, stdev=1083.90 00:13:57.750 clat (usec): min=61, max=399, avg=136.99, stdev=18.71 00:13:57.750 lat (usec): min=68, max=407, avg=144.65, stdev=18.74 00:13:57.750 clat percentiles (usec): 00:13:57.750 | 1.00th=[ 95], 5.00th=[ 125], 10.00th=[ 127], 20.00th=[ 129], 00:13:57.750 | 30.00th=[ 131], 40.00th=[ 133], 50.00th=[ 133], 60.00th=[ 135], 00:13:57.750 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 159], 95.00th=[ 167], 00:13:57.750 | 99.00th=[ 196], 99.50th=[ 227], 99.90th=[ 363], 99.95th=[ 375], 00:13:57.750 | 99.99th=[ 400] 00:13:57.750 write: IOPS=3580, BW=14.0MiB/s 
(14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:13:57.750 slat (nsec): min=8325, max=40516, avg=9592.40, stdev=1142.17 00:13:57.750 clat (usec): min=62, max=558, avg=129.73, stdev=22.22 00:13:57.750 lat (usec): min=72, max=569, avg=139.32, stdev=22.28 00:13:57.750 clat percentiles (usec): 00:13:57.750 | 1.00th=[ 78], 5.00th=[ 116], 10.00th=[ 119], 20.00th=[ 121], 00:13:57.750 | 30.00th=[ 123], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 127], 00:13:57.751 | 70.00th=[ 130], 80.00th=[ 139], 90.00th=[ 151], 95.00th=[ 159], 00:13:57.751 | 99.00th=[ 206], 99.50th=[ 255], 99.90th=[ 367], 99.95th=[ 445], 00:13:57.751 | 99.99th=[ 562] 00:13:57.751 bw ( KiB/s): min=14880, max=14880, per=25.08%, avg=14880.00, stdev= 0.00, samples=1 00:13:57.751 iops : min= 3720, max= 3720, avg=3720.00, stdev= 0.00, samples=1 00:13:57.751 lat (usec) : 100=1.97%, 250=97.60%, 500=0.42%, 750=0.01% 00:13:57.751 cpu : usr=2.70%, sys=6.80%, ctx=6945, majf=0, minf=2 00:13:57.751 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:57.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.751 issued rwts: total=3361,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:57.751 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:57.751 00:13:57.751 Run status group 0 (all jobs): 00:13:57.751 READ: bw=54.5MiB/s (57.2MB/s), 13.1MiB/s-15.2MiB/s (13.7MB/s-15.9MB/s), io=54.6MiB (57.3MB), run=1001-1001msec 00:13:57.751 WRITE: bw=57.9MiB/s (60.8MB/s), 14.0MiB/s-16.0MiB/s (14.7MB/s-16.8MB/s), io=58.0MiB (60.8MB), run=1001-1001msec 00:13:57.751 00:13:57.751 Disk stats (read/write): 00:13:57.751 nvme0n1: ios=2908/3072, merge=0/0, ticks=397/380, in_queue=777, util=86.67% 00:13:57.751 nvme0n2: ios=3373/3584, merge=0/0, ticks=389/369, in_queue=758, util=87.26% 00:13:57.751 nvme0n3: ios=2843/3072, merge=0/0, ticks=385/383, in_queue=768, util=89.18% 00:13:57.751 nvme0n4: 
ios=2849/3072, merge=0/0, ticks=383/396, in_queue=779, util=89.63% 00:13:57.751 04:05:11 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:57.751 [global] 00:13:57.751 thread=1 00:13:57.751 invalidate=1 00:13:57.751 rw=write 00:13:57.751 time_based=1 00:13:57.751 runtime=1 00:13:57.751 ioengine=libaio 00:13:57.751 direct=1 00:13:57.751 bs=4096 00:13:57.751 iodepth=128 00:13:57.751 norandommap=0 00:13:57.751 numjobs=1 00:13:57.751 00:13:57.751 verify_dump=1 00:13:57.751 verify_backlog=512 00:13:57.751 verify_state_save=0 00:13:57.751 do_verify=1 00:13:57.751 verify=crc32c-intel 00:13:57.751 [job0] 00:13:57.751 filename=/dev/nvme0n1 00:13:57.751 [job1] 00:13:57.751 filename=/dev/nvme0n2 00:13:57.751 [job2] 00:13:57.751 filename=/dev/nvme0n3 00:13:57.751 [job3] 00:13:57.751 filename=/dev/nvme0n4 00:13:57.751 Could not set queue depth (nvme0n1) 00:13:57.751 Could not set queue depth (nvme0n2) 00:13:57.751 Could not set queue depth (nvme0n3) 00:13:57.751 Could not set queue depth (nvme0n4) 00:13:57.751 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:57.751 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:57.751 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:57.751 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:57.751 fio-3.35 00:13:57.751 Starting 4 threads 00:13:59.126 00:13:59.126 job0: (groupid=0, jobs=1): err= 0: pid=281779: Fri Apr 19 04:05:13 2024 00:13:59.126 read: IOPS=7158, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1002msec) 00:13:59.126 slat (nsec): min=1191, max=5693.8k, avg=68067.36, stdev=336141.77 00:13:59.126 clat (usec): min=807, max=20809, avg=9008.44, stdev=3667.73 00:13:59.126 lat (usec): min=1261, max=20815, avg=9076.51, 
stdev=3686.62 00:13:59.126 clat percentiles (usec): 00:13:59.126 | 1.00th=[ 3458], 5.00th=[ 4424], 10.00th=[ 5014], 20.00th=[ 6128], 00:13:59.126 | 30.00th=[ 6456], 40.00th=[ 6915], 50.00th=[ 7963], 60.00th=[ 9372], 00:13:59.126 | 70.00th=[10683], 80.00th=[12125], 90.00th=[14091], 95.00th=[16581], 00:13:59.126 | 99.00th=[19006], 99.50th=[20055], 99.90th=[20841], 99.95th=[20841], 00:13:59.126 | 99.99th=[20841] 00:13:59.126 write: IOPS=7664, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1002msec); 0 zone resets 00:13:59.126 slat (nsec): min=1734, max=6793.0k, avg=63930.27, stdev=321936.53 00:13:59.126 clat (usec): min=1272, max=25589, avg=8116.49, stdev=3571.62 00:13:59.126 lat (usec): min=1275, max=30578, avg=8180.42, stdev=3595.55 00:13:59.126 clat percentiles (usec): 00:13:59.126 | 1.00th=[ 3359], 5.00th=[ 4113], 10.00th=[ 4752], 20.00th=[ 5473], 00:13:59.126 | 30.00th=[ 5997], 40.00th=[ 6390], 50.00th=[ 6783], 60.00th=[ 7701], 00:13:59.126 | 70.00th=[ 9110], 80.00th=[10945], 90.00th=[12911], 95.00th=[15139], 00:13:59.126 | 99.00th=[20579], 99.50th=[22414], 99.90th=[23725], 99.95th=[25035], 00:13:59.126 | 99.99th=[25560] 00:13:59.126 bw ( KiB/s): min=30040, max=30040, per=29.06%, avg=30040.00, stdev= 0.00, samples=1 00:13:59.126 iops : min= 7510, max= 7510, avg=7510.00, stdev= 0.00, samples=1 00:13:59.126 lat (usec) : 1000=0.01% 00:13:59.126 lat (msec) : 2=0.17%, 4=3.03%, 10=66.54%, 20=29.37%, 50=0.89% 00:13:59.126 cpu : usr=3.80%, sys=3.30%, ctx=1637, majf=0, minf=1 00:13:59.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:13:59.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:59.126 issued rwts: total=7173,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:59.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:59.126 job1: (groupid=0, jobs=1): err= 0: pid=281791: Fri Apr 19 04:05:13 2024 00:13:59.126 read: 
IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:13:59.126 slat (nsec): min=1212, max=5079.7k, avg=84378.21, stdev=391786.34 00:13:59.126 clat (usec): min=2871, max=29334, avg=10824.79, stdev=4994.87 00:13:59.126 lat (usec): min=2872, max=29352, avg=10909.17, stdev=5025.74 00:13:59.126 clat percentiles (usec): 00:13:59.126 | 1.00th=[ 3490], 5.00th=[ 4686], 10.00th=[ 5342], 20.00th=[ 6390], 00:13:59.126 | 30.00th=[ 7308], 40.00th=[ 8225], 50.00th=[ 9634], 60.00th=[11207], 00:13:59.126 | 70.00th=[12911], 80.00th=[15401], 90.00th=[18482], 95.00th=[19530], 00:13:59.126 | 99.00th=[23725], 99.50th=[26346], 99.90th=[29230], 99.95th=[29230], 00:13:59.126 | 99.99th=[29230] 00:13:59.126 write: IOPS=5935, BW=23.2MiB/s (24.3MB/s)(23.3MiB/1003msec); 0 zone resets 00:13:59.126 slat (nsec): min=1723, max=5429.1k, avg=85286.86, stdev=373180.99 00:13:59.126 clat (usec): min=1821, max=26508, avg=11091.00, stdev=5061.17 00:13:59.126 lat (usec): min=2221, max=26510, avg=11176.28, stdev=5086.89 00:13:59.126 clat percentiles (usec): 00:13:59.126 | 1.00th=[ 3589], 5.00th=[ 4686], 10.00th=[ 5669], 20.00th=[ 6521], 00:13:59.126 | 30.00th=[ 7373], 40.00th=[ 8455], 50.00th=[ 9503], 60.00th=[11863], 00:13:59.126 | 70.00th=[13829], 80.00th=[15926], 90.00th=[19268], 95.00th=[20317], 00:13:59.126 | 99.00th=[23200], 99.50th=[23987], 99.90th=[26608], 99.95th=[26608], 00:13:59.126 | 99.99th=[26608] 00:13:59.126 bw ( KiB/s): min=22032, max=24526, per=22.52%, avg=23279.00, stdev=1763.52, samples=2 00:13:59.126 iops : min= 5508, max= 6131, avg=5819.50, stdev=440.53, samples=2 00:13:59.126 lat (msec) : 2=0.01%, 4=1.78%, 10=50.63%, 20=42.17%, 50=5.41% 00:13:59.126 cpu : usr=3.09%, sys=3.09%, ctx=1488, majf=0, minf=1 00:13:59.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:59.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:59.126 issued rwts: 
total=5632,5953,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:59.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:59.127 job2: (groupid=0, jobs=1): err= 0: pid=281806: Fri Apr 19 04:05:13 2024 00:13:59.127 read: IOPS=5663, BW=22.1MiB/s (23.2MB/s)(22.2MiB/1003msec) 00:13:59.127 slat (nsec): min=1247, max=6083.2k, avg=87068.39, stdev=386201.52 00:13:59.127 clat (usec): min=890, max=23664, avg=11258.45, stdev=4511.32 00:13:59.127 lat (usec): min=2563, max=23666, avg=11345.51, stdev=4534.24 00:13:59.127 clat percentiles (usec): 00:13:59.127 | 1.00th=[ 3490], 5.00th=[ 4883], 10.00th=[ 5669], 20.00th=[ 6718], 00:13:59.127 | 30.00th=[ 8029], 40.00th=[ 9241], 50.00th=[10814], 60.00th=[12649], 00:13:59.127 | 70.00th=[14091], 80.00th=[15270], 90.00th=[16909], 95.00th=[19530], 00:13:59.127 | 99.00th=[22676], 99.50th=[23200], 99.90th=[23200], 99.95th=[23200], 00:13:59.127 | 99.99th=[23725] 00:13:59.127 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:13:59.127 slat (nsec): min=1776, max=4778.9k, avg=77403.89, stdev=351783.06 00:13:59.127 clat (usec): min=2354, max=24770, avg=10275.03, stdev=4090.95 00:13:59.127 lat (usec): min=3119, max=24777, avg=10352.44, stdev=4113.02 00:13:59.127 clat percentiles (usec): 00:13:59.127 | 1.00th=[ 3818], 5.00th=[ 5014], 10.00th=[ 5800], 20.00th=[ 6980], 00:13:59.127 | 30.00th=[ 7635], 40.00th=[ 8291], 50.00th=[ 9241], 60.00th=[10421], 00:13:59.127 | 70.00th=[11863], 80.00th=[13960], 90.00th=[16057], 95.00th=[18220], 00:13:59.127 | 99.00th=[21890], 99.50th=[22152], 99.90th=[24773], 99.95th=[24773], 00:13:59.127 | 99.99th=[24773] 00:13:59.127 bw ( KiB/s): min=23944, max=24526, per=23.44%, avg=24235.00, stdev=411.54, samples=2 00:13:59.127 iops : min= 5986, max= 6131, avg=6058.50, stdev=102.53, samples=2 00:13:59.127 lat (usec) : 1000=0.01% 00:13:59.127 lat (msec) : 4=1.27%, 10=49.97%, 20=45.70%, 50=3.05% 00:13:59.127 cpu : usr=2.59%, sys=3.49%, ctx=1367, majf=0, minf=1 00:13:59.127 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:59.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:59.127 issued rwts: total=5680,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:59.127 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:59.127 job3: (groupid=0, jobs=1): err= 0: pid=281811: Fri Apr 19 04:05:13 2024 00:13:59.127 read: IOPS=6081, BW=23.8MiB/s (24.9MB/s)(23.8MiB/1002msec) 00:13:59.127 slat (nsec): min=1261, max=5998.5k, avg=82689.37, stdev=416972.54 00:13:59.127 clat (usec): min=357, max=21620, avg=10630.07, stdev=3932.44 00:13:59.127 lat (usec): min=1942, max=21621, avg=10712.76, stdev=3953.88 00:13:59.127 clat percentiles (usec): 00:13:59.127 | 1.00th=[ 3130], 5.00th=[ 5014], 10.00th=[ 5866], 20.00th=[ 7439], 00:13:59.127 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[10028], 60.00th=[10814], 00:13:59.127 | 70.00th=[12125], 80.00th=[13698], 90.00th=[16319], 95.00th=[18482], 00:13:59.127 | 99.00th=[20579], 99.50th=[20841], 99.90th=[21627], 99.95th=[21627], 00:13:59.127 | 99.99th=[21627] 00:13:59.127 write: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets 00:13:59.127 slat (nsec): min=1778, max=5085.9k, avg=77205.66, stdev=368433.74 00:13:59.127 clat (usec): min=2937, max=20070, avg=10093.23, stdev=3621.95 00:13:59.127 lat (usec): min=3206, max=24180, avg=10170.44, stdev=3644.18 00:13:59.127 clat percentiles (usec): 00:13:59.127 | 1.00th=[ 3720], 5.00th=[ 4621], 10.00th=[ 5407], 20.00th=[ 6718], 00:13:59.127 | 30.00th=[ 7898], 40.00th=[ 8586], 50.00th=[ 9765], 60.00th=[10683], 00:13:59.127 | 70.00th=[11863], 80.00th=[13829], 90.00th=[15139], 95.00th=[15926], 00:13:59.127 | 99.00th=[19268], 99.50th=[19268], 99.90th=[20055], 99.95th=[20055], 00:13:59.127 | 99.99th=[20055] 00:13:59.127 bw ( KiB/s): min=21640, max=27512, per=23.77%, avg=24576.00, stdev=4152.13, samples=2 00:13:59.127 iops : min= 
5410, max= 6878, avg=6144.00, stdev=1038.03, samples=2 00:13:59.127 lat (usec) : 500=0.01% 00:13:59.127 lat (msec) : 2=0.15%, 4=1.47%, 10=49.58%, 20=47.55%, 50=1.24% 00:13:59.127 cpu : usr=3.30%, sys=3.50%, ctx=1414, majf=0, minf=1 00:13:59.127 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:59.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:59.127 issued rwts: total=6094,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:59.127 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:59.127 00:13:59.127 Run status group 0 (all jobs): 00:13:59.127 READ: bw=95.7MiB/s (100MB/s), 21.9MiB/s-28.0MiB/s (23.0MB/s-29.3MB/s), io=96.0MiB (101MB), run=1002-1003msec 00:13:59.127 WRITE: bw=101MiB/s (106MB/s), 23.2MiB/s-29.9MiB/s (24.3MB/s-31.4MB/s), io=101MiB (106MB), run=1002-1003msec 00:13:59.127 00:13:59.127 Disk stats (read/write): 00:13:59.127 nvme0n1: ios=6194/6280, merge=0/0, ticks=19271/18514, in_queue=37785, util=86.47% 00:13:59.127 nvme0n2: ios=5125/5586, merge=0/0, ticks=15832/16442, in_queue=32274, util=86.54% 00:13:59.127 nvme0n3: ios=5036/5120, merge=0/0, ticks=19192/18329, in_queue=37521, util=88.85% 00:13:59.127 nvme0n4: ios=5152/5632, merge=0/0, ticks=21478/22246, in_queue=43724, util=89.41% 00:13:59.127 04:05:13 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:59.127 [global] 00:13:59.127 thread=1 00:13:59.127 invalidate=1 00:13:59.127 rw=randwrite 00:13:59.127 time_based=1 00:13:59.127 runtime=1 00:13:59.127 ioengine=libaio 00:13:59.127 direct=1 00:13:59.127 bs=4096 00:13:59.127 iodepth=128 00:13:59.127 norandommap=0 00:13:59.127 numjobs=1 00:13:59.127 00:13:59.127 verify_dump=1 00:13:59.127 verify_backlog=512 00:13:59.127 verify_state_save=0 00:13:59.127 do_verify=1 00:13:59.127 verify=crc32c-intel 00:13:59.127 
[job0] 00:13:59.127 filename=/dev/nvme0n1 00:13:59.127 [job1] 00:13:59.127 filename=/dev/nvme0n2 00:13:59.127 [job2] 00:13:59.127 filename=/dev/nvme0n3 00:13:59.127 [job3] 00:13:59.127 filename=/dev/nvme0n4 00:13:59.127 Could not set queue depth (nvme0n1) 00:13:59.127 Could not set queue depth (nvme0n2) 00:13:59.127 Could not set queue depth (nvme0n3) 00:13:59.127 Could not set queue depth (nvme0n4) 00:13:59.385 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:59.385 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:59.385 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:59.385 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:59.385 fio-3.35 00:13:59.385 Starting 4 threads 00:14:00.760 00:14:00.760 job0: (groupid=0, jobs=1): err= 0: pid=282283: Fri Apr 19 04:05:14 2024 00:14:00.760 read: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:14:00.760 slat (nsec): min=1179, max=5135.6k, avg=69275.86, stdev=343943.15 00:14:00.760 clat (usec): min=3845, max=22199, avg=9338.35, stdev=3348.64 00:14:00.760 lat (usec): min=3922, max=22202, avg=9407.63, stdev=3367.95 00:14:00.760 clat percentiles (usec): 00:14:00.760 | 1.00th=[ 4817], 5.00th=[ 5669], 10.00th=[ 6128], 20.00th=[ 6456], 00:14:00.760 | 30.00th=[ 6849], 40.00th=[ 7701], 50.00th=[ 8455], 60.00th=[ 9372], 00:14:00.760 | 70.00th=[10814], 80.00th=[11731], 90.00th=[14091], 95.00th=[16188], 00:14:00.760 | 99.00th=[19530], 99.50th=[19792], 99.90th=[21103], 99.95th=[21103], 00:14:00.760 | 99.99th=[22152] 00:14:00.760 write: IOPS=7027, BW=27.5MiB/s (28.8MB/s)(27.5MiB/1002msec); 0 zone resets 00:14:00.760 slat (nsec): min=1718, max=5504.3k, avg=73002.27, stdev=334183.14 00:14:00.760 clat (usec): min=680, max=18183, avg=9157.44, stdev=2849.59 00:14:00.760 
lat (usec): min=3294, max=20379, avg=9230.44, stdev=2862.46 00:14:00.760 clat percentiles (usec): 00:14:00.760 | 1.00th=[ 4178], 5.00th=[ 5407], 10.00th=[ 5866], 20.00th=[ 6259], 00:14:00.760 | 30.00th=[ 6718], 40.00th=[ 7832], 50.00th=[ 9241], 60.00th=[10290], 00:14:00.760 | 70.00th=[11076], 80.00th=[11731], 90.00th=[12387], 95.00th=[13960], 00:14:00.760 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18220], 99.95th=[18220], 00:14:00.760 | 99.99th=[18220] 00:14:00.760 bw ( KiB/s): min=22552, max=32768, per=26.69%, avg=27660.00, stdev=7223.80, samples=2 00:14:00.760 iops : min= 5638, max= 8192, avg=6915.00, stdev=1805.95, samples=2 00:14:00.760 lat (usec) : 750=0.01% 00:14:00.760 lat (msec) : 4=0.51%, 10=60.29%, 20=38.97%, 50=0.23% 00:14:00.760 cpu : usr=2.90%, sys=5.29%, ctx=1775, majf=0, minf=1 00:14:00.760 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:14:00.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:00.760 issued rwts: total=6656,7042,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:00.760 job1: (groupid=0, jobs=1): err= 0: pid=282294: Fri Apr 19 04:05:14 2024 00:14:00.760 read: IOPS=6199, BW=24.2MiB/s (25.4MB/s)(24.3MiB/1002msec) 00:14:00.760 slat (nsec): min=1209, max=4526.6k, avg=79764.19, stdev=332836.47 00:14:00.760 clat (usec): min=1011, max=20413, avg=10394.30, stdev=3777.85 00:14:00.760 lat (usec): min=2051, max=20450, avg=10474.06, stdev=3796.89 00:14:00.760 clat percentiles (usec): 00:14:00.760 | 1.00th=[ 3425], 5.00th=[ 5080], 10.00th=[ 5932], 20.00th=[ 6783], 00:14:00.760 | 30.00th=[ 7767], 40.00th=[ 9110], 50.00th=[10290], 60.00th=[11338], 00:14:00.760 | 70.00th=[12125], 80.00th=[13829], 90.00th=[16057], 95.00th=[17171], 00:14:00.760 | 99.00th=[19006], 99.50th=[19530], 99.90th=[20055], 99.95th=[20055], 00:14:00.760 | 99.99th=[20317] 
00:14:00.760 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets 00:14:00.760 slat (nsec): min=1728, max=3696.6k, avg=72307.07, stdev=311763.41 00:14:00.760 clat (usec): min=3534, max=19140, avg=9361.48, stdev=3690.80 00:14:00.760 lat (usec): min=3550, max=19143, avg=9433.78, stdev=3711.32 00:14:00.760 clat percentiles (usec): 00:14:00.760 | 1.00th=[ 4424], 5.00th=[ 4817], 10.00th=[ 5211], 20.00th=[ 5866], 00:14:00.760 | 30.00th=[ 6456], 40.00th=[ 7439], 50.00th=[ 8586], 60.00th=[10421], 00:14:00.760 | 70.00th=[11076], 80.00th=[12649], 90.00th=[14746], 95.00th=[16712], 00:14:00.760 | 99.00th=[17957], 99.50th=[18220], 99.90th=[18744], 99.95th=[19268], 00:14:00.760 | 99.99th=[19268] 00:14:00.760 bw ( KiB/s): min=24576, max=28200, per=25.46%, avg=26388.00, stdev=2562.55, samples=2 00:14:00.760 iops : min= 6144, max= 7050, avg=6597.00, stdev=640.64, samples=2 00:14:00.760 lat (msec) : 2=0.01%, 4=0.89%, 10=52.34%, 20=46.73%, 50=0.03% 00:14:00.760 cpu : usr=3.80%, sys=3.80%, ctx=1625, majf=0, minf=1 00:14:00.760 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:14:00.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:00.760 issued rwts: total=6212,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:00.760 job2: (groupid=0, jobs=1): err= 0: pid=282310: Fri Apr 19 04:05:14 2024 00:14:00.760 read: IOPS=5321, BW=20.8MiB/s (21.8MB/s)(20.8MiB/1003msec) 00:14:00.760 slat (nsec): min=1224, max=5539.7k, avg=91376.79, stdev=448984.50 00:14:00.760 clat (usec): min=1404, max=21381, avg=11616.92, stdev=3402.33 00:14:00.760 lat (usec): min=3044, max=21388, avg=11708.30, stdev=3417.60 00:14:00.760 clat percentiles (usec): 00:14:00.760 | 1.00th=[ 5145], 5.00th=[ 6652], 10.00th=[ 7570], 20.00th=[ 8455], 00:14:00.760 | 30.00th=[ 9372], 40.00th=[10028], 
50.00th=[11207], 60.00th=[12518], 00:14:00.760 | 70.00th=[14091], 80.00th=[14877], 90.00th=[15795], 95.00th=[17433], 00:14:00.760 | 99.00th=[19792], 99.50th=[20841], 99.90th=[21365], 99.95th=[21365], 00:14:00.760 | 99.99th=[21365] 00:14:00.760 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:14:00.760 slat (nsec): min=1809, max=5062.2k, avg=87556.08, stdev=410691.57 00:14:00.760 clat (usec): min=4249, max=21363, avg=11493.12, stdev=3242.21 00:14:00.760 lat (usec): min=4253, max=21367, avg=11580.68, stdev=3249.90 00:14:00.760 clat percentiles (usec): 00:14:00.760 | 1.00th=[ 5211], 5.00th=[ 6521], 10.00th=[ 7308], 20.00th=[ 8160], 00:14:00.760 | 30.00th=[ 9503], 40.00th=[10552], 50.00th=[11600], 60.00th=[12649], 00:14:00.760 | 70.00th=[13304], 80.00th=[14222], 90.00th=[15795], 95.00th=[16909], 00:14:00.760 | 99.00th=[19006], 99.50th=[19792], 99.90th=[20841], 99.95th=[20841], 00:14:00.760 | 99.99th=[21365] 00:14:00.760 bw ( KiB/s): min=20480, max=24576, per=21.74%, avg=22528.00, stdev=2896.31, samples=2 00:14:00.760 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:14:00.760 lat (msec) : 2=0.01%, 4=0.06%, 10=36.69%, 20=62.55%, 50=0.68% 00:14:00.760 cpu : usr=2.69%, sys=3.89%, ctx=1179, majf=0, minf=1 00:14:00.760 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:14:00.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:00.760 issued rwts: total=5337,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:00.760 job3: (groupid=0, jobs=1): err= 0: pid=282315: Fri Apr 19 04:05:14 2024 00:14:00.760 read: IOPS=6495, BW=25.4MiB/s (26.6MB/s)(25.4MiB/1003msec) 00:14:00.760 slat (nsec): min=1197, max=6307.5k, avg=75428.15, stdev=334257.72 00:14:00.760 clat (usec): min=1417, max=20379, avg=9632.13, stdev=3000.78 00:14:00.760 lat 
(usec): min=2932, max=20380, avg=9707.56, stdev=3016.12 00:14:00.760 clat percentiles (usec): 00:14:00.760 | 1.00th=[ 4490], 5.00th=[ 5735], 10.00th=[ 6456], 20.00th=[ 7177], 00:14:00.760 | 30.00th=[ 7701], 40.00th=[ 8094], 50.00th=[ 8848], 60.00th=[ 9896], 00:14:00.760 | 70.00th=[10945], 80.00th=[12518], 90.00th=[13960], 95.00th=[15008], 00:14:00.760 | 99.00th=[18220], 99.50th=[19792], 99.90th=[19792], 99.95th=[20317], 00:14:00.760 | 99.99th=[20317] 00:14:00.760 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:14:00.760 slat (nsec): min=1702, max=4549.6k, avg=72863.07, stdev=318400.28 00:14:00.760 clat (usec): min=2897, max=22803, avg=9617.16, stdev=3403.02 00:14:00.760 lat (usec): min=2904, max=22805, avg=9690.03, stdev=3421.12 00:14:00.760 clat percentiles (usec): 00:14:00.760 | 1.00th=[ 4359], 5.00th=[ 5211], 10.00th=[ 5997], 20.00th=[ 6849], 00:14:00.760 | 30.00th=[ 7439], 40.00th=[ 7898], 50.00th=[ 8979], 60.00th=[ 9896], 00:14:00.760 | 70.00th=[10945], 80.00th=[12518], 90.00th=[13829], 95.00th=[15008], 00:14:00.760 | 99.00th=[21627], 99.50th=[22152], 99.90th=[22676], 99.95th=[22676], 00:14:00.760 | 99.99th=[22676] 00:14:00.760 bw ( KiB/s): min=24576, max=28672, per=25.69%, avg=26624.00, stdev=2896.31, samples=2 00:14:00.760 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:14:00.760 lat (msec) : 2=0.01%, 4=0.52%, 10=60.44%, 20=38.06%, 50=0.96% 00:14:00.760 cpu : usr=3.49%, sys=4.49%, ctx=1467, majf=0, minf=1 00:14:00.760 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:14:00.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:00.760 issued rwts: total=6515,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:00.760 00:14:00.760 Run status group 0 (all jobs): 00:14:00.761 READ: bw=96.3MiB/s (101MB/s), 
20.8MiB/s-25.9MiB/s (21.8MB/s-27.2MB/s), io=96.6MiB (101MB), run=1002-1003msec 00:14:00.761 WRITE: bw=101MiB/s (106MB/s), 21.9MiB/s-27.5MiB/s (23.0MB/s-28.8MB/s), io=102MiB (106MB), run=1002-1003msec 00:14:00.761 00:14:00.761 Disk stats (read/write): 00:14:00.761 nvme0n1: ios=5903/6144, merge=0/0, ticks=14581/14302, in_queue=28883, util=86.67% 00:14:00.761 nvme0n2: ios=5120/5500, merge=0/0, ticks=14323/13601, in_queue=27924, util=87.08% 00:14:00.761 nvme0n3: ios=4608/4657, merge=0/0, ticks=14657/14261, in_queue=28918, util=89.11% 00:14:00.761 nvme0n4: ios=5632/5732, merge=0/0, ticks=14691/14186, in_queue=28877, util=89.24% 00:14:00.761 04:05:14 -- target/fio.sh@55 -- # sync 00:14:00.761 04:05:14 -- target/fio.sh@59 -- # fio_pid=282427 00:14:00.761 04:05:14 -- target/fio.sh@61 -- # sleep 3 00:14:00.761 04:05:14 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:00.761 [global] 00:14:00.761 thread=1 00:14:00.761 invalidate=1 00:14:00.761 rw=read 00:14:00.761 time_based=1 00:14:00.761 runtime=10 00:14:00.761 ioengine=libaio 00:14:00.761 direct=1 00:14:00.761 bs=4096 00:14:00.761 iodepth=1 00:14:00.761 norandommap=1 00:14:00.761 numjobs=1 00:14:00.761 00:14:00.761 [job0] 00:14:00.761 filename=/dev/nvme0n1 00:14:00.761 [job1] 00:14:00.761 filename=/dev/nvme0n2 00:14:00.761 [job2] 00:14:00.761 filename=/dev/nvme0n3 00:14:00.761 [job3] 00:14:00.761 filename=/dev/nvme0n4 00:14:00.761 Could not set queue depth (nvme0n1) 00:14:00.761 Could not set queue depth (nvme0n2) 00:14:00.761 Could not set queue depth (nvme0n3) 00:14:00.761 Could not set queue depth (nvme0n4) 00:14:00.761 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:00.761 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:00.761 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:14:00.761 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:00.761 fio-3.35 00:14:00.761 Starting 4 threads 00:14:04.046 04:05:17 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:04.046 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=115445760, buflen=4096 00:14:04.047 fio: pid=282755, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:04.047 04:05:18 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:04.047 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=77246464, buflen=4096 00:14:04.047 fio: pid=282750, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:04.047 04:05:18 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:04.047 04:05:18 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:04.047 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=16396288, buflen=4096 00:14:04.047 fio: pid=282720, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:04.047 04:05:18 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:04.047 04:05:18 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:04.306 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=26193920, buflen=4096 00:14:04.306 fio: pid=282733, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:04.306 04:05:18 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:04.306 04:05:18 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc2 00:14:04.306 00:14:04.306 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=282720: Fri Apr 19 04:05:18 2024 00:14:04.306 read: IOPS=6623, BW=25.9MiB/s (27.1MB/s)(79.6MiB/3078msec) 00:14:04.306 slat (usec): min=5, max=34936, avg=10.96, stdev=307.15 00:14:04.306 clat (usec): min=45, max=354, avg=137.58, stdev=33.58 00:14:04.306 lat (usec): min=52, max=35016, avg=148.54, stdev=308.53 00:14:04.306 clat percentiles (usec): 00:14:04.306 | 1.00th=[ 55], 5.00th=[ 74], 10.00th=[ 81], 20.00th=[ 109], 00:14:04.306 | 30.00th=[ 135], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 151], 00:14:04.306 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 172], 95.00th=[ 190], 00:14:04.306 | 99.00th=[ 208], 99.50th=[ 212], 99.90th=[ 221], 99.95th=[ 227], 00:14:04.306 | 99.99th=[ 306] 00:14:04.306 bw ( KiB/s): min=25304, max=27320, per=23.15%, avg=25803.20, stdev=854.48, samples=5 00:14:04.306 iops : min= 6326, max= 6830, avg=6450.80, stdev=213.62, samples=5 00:14:04.306 lat (usec) : 50=0.26%, 100=16.85%, 250=82.87%, 500=0.02% 00:14:04.306 cpu : usr=1.56%, sys=5.91%, ctx=20393, majf=0, minf=1 00:14:04.306 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:04.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.306 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.306 issued rwts: total=20388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.306 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:04.306 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=282733: Fri Apr 19 04:05:18 2024 00:14:04.306 read: IOPS=7037, BW=27.5MiB/s (28.8MB/s)(89.0MiB/3237msec) 00:14:04.306 slat (usec): min=2, max=16790, avg=10.99, stdev=240.79 00:14:04.306 clat (usec): min=43, max=19025, avg=129.60, stdev=131.45 00:14:04.306 lat (usec): min=46, max=19033, avg=140.59, stdev=273.82 00:14:04.306 
clat percentiles (usec): 00:14:04.306 | 1.00th=[ 50], 5.00th=[ 55], 10.00th=[ 61], 20.00th=[ 84], 00:14:04.306 | 30.00th=[ 112], 40.00th=[ 137], 50.00th=[ 145], 60.00th=[ 149], 00:14:04.306 | 70.00th=[ 151], 80.00th=[ 157], 90.00th=[ 165], 95.00th=[ 188], 00:14:04.306 | 99.00th=[ 208], 99.50th=[ 212], 99.90th=[ 225], 99.95th=[ 231], 00:14:04.306 | 99.99th=[ 343] 00:14:04.306 bw ( KiB/s): min=25520, max=31096, per=24.07%, avg=26829.33, stdev=2211.71, samples=6 00:14:04.306 iops : min= 6380, max= 7774, avg=6707.33, stdev=552.93, samples=6 00:14:04.306 lat (usec) : 50=1.37%, 100=24.65%, 250=73.95%, 500=0.02% 00:14:04.306 lat (msec) : 20=0.01% 00:14:04.306 cpu : usr=1.82%, sys=6.03%, ctx=22787, majf=0, minf=1 00:14:04.306 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:04.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.306 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.306 issued rwts: total=22780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.306 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:04.306 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=282750: Fri Apr 19 04:05:18 2024 00:14:04.306 read: IOPS=6483, BW=25.3MiB/s (26.6MB/s)(73.7MiB/2909msec) 00:14:04.306 slat (usec): min=5, max=16883, avg= 8.67, stdev=135.67 00:14:04.306 clat (usec): min=52, max=324, avg=143.02, stdev=28.90 00:14:04.306 lat (usec): min=59, max=16972, avg=151.69, stdev=138.23 00:14:04.306 clat percentiles (usec): 00:14:04.306 | 1.00th=[ 78], 5.00th=[ 85], 10.00th=[ 93], 20.00th=[ 126], 00:14:04.306 | 30.00th=[ 139], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 151], 00:14:04.306 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 178], 95.00th=[ 190], 00:14:04.306 | 99.00th=[ 206], 99.50th=[ 210], 99.90th=[ 219], 99.95th=[ 221], 00:14:04.306 | 99.99th=[ 318] 00:14:04.306 bw ( KiB/s): min=25488, max=26208, per=23.03%, avg=25676.80, 
stdev=299.79, samples=5 00:14:04.306 iops : min= 6372, max= 6552, avg=6419.20, stdev=74.95, samples=5 00:14:04.306 lat (usec) : 100=12.97%, 250=87.01%, 500=0.02% 00:14:04.306 cpu : usr=1.75%, sys=5.78%, ctx=18862, majf=0, minf=1 00:14:04.306 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:04.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.306 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.306 issued rwts: total=18860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.306 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:04.306 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=282755: Fri Apr 19 04:05:18 2024 00:14:04.306 read: IOPS=10.3k, BW=40.2MiB/s (42.2MB/s)(110MiB/2736msec) 00:14:04.306 slat (nsec): min=5721, max=33234, avg=6903.95, stdev=771.01 00:14:04.306 clat (usec): min=54, max=321, avg=88.16, stdev=11.33 00:14:04.306 lat (usec): min=60, max=328, avg=95.06, stdev=11.39 00:14:04.306 clat percentiles (usec): 00:14:04.306 | 1.00th=[ 74], 5.00th=[ 77], 10.00th=[ 79], 20.00th=[ 82], 00:14:04.306 | 30.00th=[ 84], 40.00th=[ 86], 50.00th=[ 87], 60.00th=[ 89], 00:14:04.306 | 70.00th=[ 91], 80.00th=[ 93], 90.00th=[ 96], 95.00th=[ 99], 00:14:04.306 | 99.00th=[ 139], 99.50th=[ 165], 99.90th=[ 186], 99.95th=[ 188], 00:14:04.306 | 99.99th=[ 198] 00:14:04.306 bw ( KiB/s): min=38992, max=42400, per=37.38%, avg=41667.20, stdev=1497.65, samples=5 00:14:04.306 iops : min= 9748, max=10600, avg=10416.80, stdev=374.41, samples=5 00:14:04.306 lat (usec) : 100=95.55%, 250=4.45%, 500=0.01% 00:14:04.306 cpu : usr=2.67%, sys=8.45%, ctx=28187, majf=0, minf=2 00:14:04.306 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:04.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.306 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:14:04.306 issued rwts: total=28186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.306 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:04.306 00:14:04.306 Run status group 0 (all jobs): 00:14:04.306 READ: bw=109MiB/s (114MB/s), 25.3MiB/s-40.2MiB/s (26.6MB/s-42.2MB/s), io=352MiB (370MB), run=2736-3237msec 00:14:04.306 00:14:04.306 Disk stats (read/write): 00:14:04.306 nvme0n1: ios=18671/0, merge=0/0, ticks=2600/0, in_queue=2600, util=94.69% 00:14:04.306 nvme0n2: ios=20984/0, merge=0/0, ticks=2735/0, in_queue=2735, util=93.70% 00:14:04.306 nvme0n3: ios=18714/0, merge=0/0, ticks=2611/0, in_queue=2611, util=95.76% 00:14:04.306 nvme0n4: ios=27224/0, merge=0/0, ticks=2262/0, in_queue=2262, util=96.42% 00:14:04.565 04:05:18 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:04.565 04:05:18 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:04.565 04:05:19 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:04.565 04:05:19 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:04.823 04:05:19 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:04.823 04:05:19 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:05.081 04:05:19 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:05.081 04:05:19 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:05.081 04:05:19 -- target/fio.sh@69 -- # fio_status=0 00:14:05.081 04:05:19 -- target/fio.sh@70 -- # wait 282427 00:14:05.081 04:05:19 -- target/fio.sh@70 -- # fio_status=4 00:14:05.081 04:05:19 -- target/fio.sh@72 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:14:06.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.016 04:05:20 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:06.016 04:05:20 -- common/autotest_common.sh@1205 -- # local i=0 00:14:06.016 04:05:20 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:14:06.016 04:05:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:06.016 04:05:20 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:14:06.016 04:05:20 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:06.016 04:05:20 -- common/autotest_common.sh@1217 -- # return 0 00:14:06.016 04:05:20 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:06.016 04:05:20 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:06.016 nvmf hotplug test: fio failed as expected 00:14:06.016 04:05:20 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.275 04:05:20 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:06.275 04:05:20 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:06.275 04:05:20 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:06.275 04:05:20 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:06.275 04:05:20 -- target/fio.sh@91 -- # nvmftestfini 00:14:06.275 04:05:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:06.275 04:05:20 -- nvmf/common.sh@117 -- # sync 00:14:06.275 04:05:20 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:06.275 04:05:20 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:06.275 04:05:20 -- nvmf/common.sh@120 -- # set +e 00:14:06.275 04:05:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:06.275 04:05:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:06.275 rmmod nvme_rdma 00:14:06.275 rmmod nvme_fabrics 00:14:06.275 04:05:20 -- nvmf/common.sh@123 -- # 
modprobe -v -r nvme-fabrics 00:14:06.275 04:05:20 -- nvmf/common.sh@124 -- # set -e 00:14:06.275 04:05:20 -- nvmf/common.sh@125 -- # return 0 00:14:06.275 04:05:20 -- nvmf/common.sh@478 -- # '[' -n 279367 ']' 00:14:06.275 04:05:20 -- nvmf/common.sh@479 -- # killprocess 279367 00:14:06.275 04:05:20 -- common/autotest_common.sh@936 -- # '[' -z 279367 ']' 00:14:06.275 04:05:20 -- common/autotest_common.sh@940 -- # kill -0 279367 00:14:06.275 04:05:20 -- common/autotest_common.sh@941 -- # uname 00:14:06.275 04:05:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:06.275 04:05:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 279367 00:14:06.275 04:05:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:06.275 04:05:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:06.275 04:05:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 279367' 00:14:06.275 killing process with pid 279367 00:14:06.275 04:05:20 -- common/autotest_common.sh@955 -- # kill 279367 00:14:06.275 04:05:20 -- common/autotest_common.sh@960 -- # wait 279367 00:14:06.534 04:05:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:06.534 04:05:21 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:14:06.534 00:14:06.534 real 0m24.201s 00:14:06.534 user 2m0.686s 00:14:06.534 sys 0m8.192s 00:14:06.534 04:05:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:06.534 04:05:21 -- common/autotest_common.sh@10 -- # set +x 00:14:06.534 ************************************ 00:14:06.534 END TEST nvmf_fio_target 00:14:06.534 ************************************ 00:14:06.792 04:05:21 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:14:06.792 04:05:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:06.792 04:05:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:06.792 04:05:21 -- 
common/autotest_common.sh@10 -- # set +x 00:14:06.792 ************************************ 00:14:06.792 START TEST nvmf_bdevio 00:14:06.792 ************************************ 00:14:06.792 04:05:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:14:06.792 * Looking for test storage... 00:14:06.792 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:06.792 04:05:21 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:06.792 04:05:21 -- nvmf/common.sh@7 -- # uname -s 00:14:07.051 04:05:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.051 04:05:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.051 04:05:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.051 04:05:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.051 04:05:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.051 04:05:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.051 04:05:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.051 04:05:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.051 04:05:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.051 04:05:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.051 04:05:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:07.051 04:05:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:14:07.051 04:05:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.051 04:05:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.051 04:05:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:07.051 04:05:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.051 04:05:21 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:07.051 04:05:21 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.051 04:05:21 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.051 04:05:21 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.051 04:05:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.052 04:05:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.052 04:05:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.052 04:05:21 -- paths/export.sh@5 -- # export PATH 00:14:07.052 04:05:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.052 04:05:21 -- nvmf/common.sh@47 -- # : 0 00:14:07.052 04:05:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:07.052 04:05:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:07.052 04:05:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.052 04:05:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.052 04:05:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.052 04:05:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:07.052 04:05:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:07.052 04:05:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:07.052 04:05:21 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:07.052 04:05:21 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:07.052 04:05:21 -- target/bdevio.sh@14 -- # 
nvmftestinit 00:14:07.052 04:05:21 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:14:07.052 04:05:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.052 04:05:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:07.052 04:05:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:07.052 04:05:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:07.052 04:05:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.052 04:05:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.052 04:05:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.052 04:05:21 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:07.052 04:05:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:07.052 04:05:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:07.052 04:05:21 -- common/autotest_common.sh@10 -- # set +x 00:14:12.317 04:05:26 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:12.317 04:05:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:12.317 04:05:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:12.317 04:05:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:12.317 04:05:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:12.317 04:05:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:12.317 04:05:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:12.317 04:05:26 -- nvmf/common.sh@295 -- # net_devs=() 00:14:12.317 04:05:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:12.317 04:05:26 -- nvmf/common.sh@296 -- # e810=() 00:14:12.317 04:05:26 -- nvmf/common.sh@296 -- # local -ga e810 00:14:12.317 04:05:26 -- nvmf/common.sh@297 -- # x722=() 00:14:12.317 04:05:26 -- nvmf/common.sh@297 -- # local -ga x722 00:14:12.317 04:05:26 -- nvmf/common.sh@298 -- # mlx=() 00:14:12.317 04:05:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:12.317 04:05:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:12.317 04:05:26 -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:12.317 04:05:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:12.317 04:05:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:12.317 04:05:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:12.317 04:05:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:12.317 04:05:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:12.317 04:05:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:12.317 04:05:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:12.317 04:05:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:12.317 04:05:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:12.317 04:05:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:12.317 04:05:26 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:12.317 04:05:26 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:12.317 04:05:26 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:12.317 04:05:26 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:12.317 04:05:26 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:12.317 04:05:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:12.317 04:05:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:12.317 04:05:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:12.317 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:12.317 04:05:26 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:12.317 04:05:26 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:12.317 04:05:26 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:12.317 04:05:26 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:12.317 04:05:26 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 
00:14:12.317 04:05:26 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:12.317 04:05:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:12.317 04:05:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:12.317 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:12.317 04:05:26 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:12.317 04:05:26 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:12.317 04:05:26 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:12.317 04:05:26 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:12.317 04:05:26 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:12.317 04:05:26 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:12.317 04:05:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:12.317 04:05:26 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:12.317 04:05:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:12.317 04:05:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.317 04:05:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:12.317 04:05:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.317 04:05:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:12.317 Found net devices under 0000:18:00.0: mlx_0_0 00:14:12.317 04:05:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.317 04:05:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:12.317 04:05:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.317 04:05:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:12.317 04:05:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.317 04:05:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:12.317 Found net devices under 0000:18:00.1: mlx_0_1 00:14:12.317 04:05:26 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:12.317 04:05:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:12.317 04:05:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:12.317 04:05:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:12.317 04:05:26 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:14:12.317 04:05:26 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:14:12.317 04:05:26 -- nvmf/common.sh@409 -- # rdma_device_init 00:14:12.317 04:05:26 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:14:12.317 04:05:26 -- nvmf/common.sh@58 -- # uname 00:14:12.317 04:05:26 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:12.317 04:05:26 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:12.317 04:05:26 -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:12.317 04:05:26 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:12.317 04:05:26 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:12.317 04:05:26 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:12.317 04:05:26 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:12.317 04:05:26 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:12.317 04:05:26 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:14:12.317 04:05:26 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:12.317 04:05:26 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:12.317 04:05:26 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:12.317 04:05:26 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:12.317 04:05:26 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:12.317 04:05:26 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:12.317 04:05:26 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:12.317 04:05:26 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:12.317 04:05:26 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:12.317 04:05:26 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:12.317 04:05:26 -- nvmf/common.sh@104 -- # echo 
mlx_0_0 00:14:12.317 04:05:26 -- nvmf/common.sh@105 -- # continue 2 00:14:12.317 04:05:26 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:12.317 04:05:26 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:12.317 04:05:26 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:12.317 04:05:26 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:12.317 04:05:26 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:12.317 04:05:26 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:12.317 04:05:26 -- nvmf/common.sh@105 -- # continue 2 00:14:12.317 04:05:26 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:12.317 04:05:26 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:12.317 04:05:26 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:12.317 04:05:26 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:12.317 04:05:26 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:12.318 04:05:26 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:12.318 04:05:26 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:12.318 04:05:26 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:12.318 04:05:26 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:12.318 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:12.318 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:14:12.318 altname enp24s0f0np0 00:14:12.318 altname ens785f0np0 00:14:12.318 inet 192.168.100.8/24 scope global mlx_0_0 00:14:12.318 valid_lft forever preferred_lft forever 00:14:12.318 04:05:26 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:12.318 04:05:26 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:12.318 04:05:26 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:12.318 04:05:26 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:12.318 04:05:26 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:12.318 04:05:26 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:12.318 
04:05:26 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:12.318 04:05:26 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:12.318 04:05:26 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:12.318 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:12.318 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:14:12.318 altname enp24s0f1np1 00:14:12.318 altname ens785f1np1 00:14:12.318 inet 192.168.100.9/24 scope global mlx_0_1 00:14:12.318 valid_lft forever preferred_lft forever 00:14:12.318 04:05:26 -- nvmf/common.sh@411 -- # return 0 00:14:12.318 04:05:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:12.318 04:05:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:12.318 04:05:26 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:14:12.318 04:05:26 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:14:12.318 04:05:26 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:12.318 04:05:26 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:12.318 04:05:26 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:12.318 04:05:26 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:12.318 04:05:26 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:12.318 04:05:26 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:12.318 04:05:26 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:12.318 04:05:26 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:12.318 04:05:26 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:12.318 04:05:26 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:12.318 04:05:26 -- nvmf/common.sh@105 -- # continue 2 00:14:12.318 04:05:26 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:12.318 04:05:26 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:12.318 04:05:26 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:12.318 04:05:26 -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:12.318 04:05:26 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:12.318 04:05:26 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:12.318 04:05:26 -- nvmf/common.sh@105 -- # continue 2 00:14:12.318 04:05:26 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:12.318 04:05:26 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:12.318 04:05:26 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:12.318 04:05:26 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:12.318 04:05:26 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:12.318 04:05:26 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:12.318 04:05:26 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:12.318 04:05:26 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:12.318 04:05:26 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:12.318 04:05:26 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:12.318 04:05:26 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:12.318 04:05:26 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:12.318 04:05:26 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:14:12.318 192.168.100.9' 00:14:12.318 04:05:26 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:12.318 192.168.100.9' 00:14:12.318 04:05:26 -- nvmf/common.sh@446 -- # head -n 1 00:14:12.318 04:05:26 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:12.318 04:05:26 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:14:12.318 192.168.100.9' 00:14:12.318 04:05:26 -- nvmf/common.sh@447 -- # tail -n +2 00:14:12.318 04:05:26 -- nvmf/common.sh@447 -- # head -n 1 00:14:12.318 04:05:26 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:12.318 04:05:26 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:14:12.318 04:05:26 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:12.318 04:05:26 -- nvmf/common.sh@457 -- # '[' rdma 
== tcp ']' 00:14:12.318 04:05:26 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:14:12.318 04:05:26 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:14:12.318 04:05:26 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:12.318 04:05:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:12.318 04:05:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:12.318 04:05:26 -- common/autotest_common.sh@10 -- # set +x 00:14:12.318 04:05:26 -- nvmf/common.sh@470 -- # nvmfpid=286903 00:14:12.318 04:05:26 -- nvmf/common.sh@471 -- # waitforlisten 286903 00:14:12.318 04:05:26 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:12.318 04:05:26 -- common/autotest_common.sh@817 -- # '[' -z 286903 ']' 00:14:12.318 04:05:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.318 04:05:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:12.318 04:05:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.318 04:05:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:12.318 04:05:26 -- common/autotest_common.sh@10 -- # set +x 00:14:12.318 [2024-04-19 04:05:26.287586] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:14:12.318 [2024-04-19 04:05:26.287634] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.318 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.318 [2024-04-19 04:05:26.340612] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:12.318 [2024-04-19 04:05:26.413944] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.318 [2024-04-19 04:05:26.413978] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.318 [2024-04-19 04:05:26.413984] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.318 [2024-04-19 04:05:26.413989] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.318 [2024-04-19 04:05:26.413994] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:12.318 [2024-04-19 04:05:26.414117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:12.318 [2024-04-19 04:05:26.414144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:12.318 [2024-04-19 04:05:26.414235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:12.318 [2024-04-19 04:05:26.414236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:12.575 04:05:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:12.575 04:05:27 -- common/autotest_common.sh@850 -- # return 0 00:14:12.575 04:05:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:12.575 04:05:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:12.575 04:05:27 -- common/autotest_common.sh@10 -- # set +x 00:14:12.575 04:05:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:12.575 04:05:27 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:12.575 04:05:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:12.575 04:05:27 -- common/autotest_common.sh@10 -- # set +x 00:14:12.832 [2024-04-19 04:05:27.123059] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ca2fa0/0x1ca7490) succeed. 00:14:12.832 [2024-04-19 04:05:27.132246] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ca4590/0x1ce8b20) succeed. 
00:14:12.832 04:05:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:12.832 04:05:27 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:12.832 04:05:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:12.832 04:05:27 -- common/autotest_common.sh@10 -- # set +x 00:14:12.832 Malloc0 00:14:12.832 04:05:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:12.832 04:05:27 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:12.832 04:05:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:12.832 04:05:27 -- common/autotest_common.sh@10 -- # set +x 00:14:12.832 04:05:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:12.832 04:05:27 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:12.832 04:05:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:12.832 04:05:27 -- common/autotest_common.sh@10 -- # set +x 00:14:12.832 04:05:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:12.832 04:05:27 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:12.832 04:05:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:12.832 04:05:27 -- common/autotest_common.sh@10 -- # set +x 00:14:12.832 [2024-04-19 04:05:27.276987] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:12.832 04:05:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:12.832 04:05:27 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:12.832 04:05:27 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:12.832 04:05:27 -- nvmf/common.sh@521 -- # config=() 00:14:12.832 04:05:27 -- nvmf/common.sh@521 -- # local subsystem config 00:14:12.832 04:05:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 
00:14:12.833 04:05:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:12.833 { 00:14:12.833 "params": { 00:14:12.833 "name": "Nvme$subsystem", 00:14:12.833 "trtype": "$TEST_TRANSPORT", 00:14:12.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:12.833 "adrfam": "ipv4", 00:14:12.833 "trsvcid": "$NVMF_PORT", 00:14:12.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:12.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:12.833 "hdgst": ${hdgst:-false}, 00:14:12.833 "ddgst": ${ddgst:-false} 00:14:12.833 }, 00:14:12.833 "method": "bdev_nvme_attach_controller" 00:14:12.833 } 00:14:12.833 EOF 00:14:12.833 )") 00:14:12.833 04:05:27 -- nvmf/common.sh@543 -- # cat 00:14:12.833 04:05:27 -- nvmf/common.sh@545 -- # jq . 00:14:12.833 04:05:27 -- nvmf/common.sh@546 -- # IFS=, 00:14:12.833 04:05:27 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:12.833 "params": { 00:14:12.833 "name": "Nvme1", 00:14:12.833 "trtype": "rdma", 00:14:12.833 "traddr": "192.168.100.8", 00:14:12.833 "adrfam": "ipv4", 00:14:12.833 "trsvcid": "4420", 00:14:12.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:12.833 "hdgst": false, 00:14:12.833 "ddgst": false 00:14:12.833 }, 00:14:12.833 "method": "bdev_nvme_attach_controller" 00:14:12.833 }' 00:14:12.833 [2024-04-19 04:05:27.324821] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:14:12.833 [2024-04-19 04:05:27.324865] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid287183 ] 00:14:12.833 EAL: No free 2048 kB hugepages reported on node 1 00:14:13.089 [2024-04-19 04:05:27.375973] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:13.089 [2024-04-19 04:05:27.445153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.090 [2024-04-19 04:05:27.445220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:13.090 [2024-04-19 04:05:27.445222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.348 I/O targets: 00:14:13.348 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:13.348 00:14:13.348 00:14:13.348 CUnit - A unit testing framework for C - Version 2.1-3 00:14:13.348 http://cunit.sourceforge.net/ 00:14:13.348 00:14:13.348 00:14:13.348 Suite: bdevio tests on: Nvme1n1 00:14:13.348 Test: blockdev write read block ...passed 00:14:13.348 Test: blockdev write zeroes read block ...passed 00:14:13.348 Test: blockdev write zeroes read no split ...passed 00:14:13.348 Test: blockdev write zeroes read split ...passed 00:14:13.348 Test: blockdev write zeroes read split partial ...passed 00:14:13.348 Test: blockdev reset ...[2024-04-19 04:05:27.649099] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:13.348 [2024-04-19 04:05:27.671201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:13.348 [2024-04-19 04:05:27.698765] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:13.348 passed 00:14:13.348 Test: blockdev write read 8 blocks ...passed 00:14:13.348 Test: blockdev write read size > 128k ...passed 00:14:13.348 Test: blockdev write read invalid size ...passed 00:14:13.348 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:13.348 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:13.348 Test: blockdev write read max offset ...passed 00:14:13.348 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:13.348 Test: blockdev writev readv 8 blocks ...passed 00:14:13.348 Test: blockdev writev readv 30 x 1block ...passed 00:14:13.348 Test: blockdev writev readv block ...passed 00:14:13.348 Test: blockdev writev readv size > 128k ...passed 00:14:13.348 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:13.348 Test: blockdev comparev and writev ...[2024-04-19 04:05:27.701449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:13.348 [2024-04-19 04:05:27.701476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:13.348 [2024-04-19 04:05:27.701485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:13.348 [2024-04-19 04:05:27.701491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:13.348 [2024-04-19 04:05:27.701659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:13.348 [2024-04-19 04:05:27.701667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:13.348 [2024-04-19 04:05:27.701674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:13.348 [2024-04-19 04:05:27.701680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:13.348 [2024-04-19 04:05:27.701824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:13.348 [2024-04-19 04:05:27.701832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:13.348 [2024-04-19 04:05:27.701839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:13.348 [2024-04-19 04:05:27.701845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:13.348 [2024-04-19 04:05:27.701978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:13.348 [2024-04-19 04:05:27.701985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:13.348 [2024-04-19 04:05:27.701992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:13.348 [2024-04-19 04:05:27.701998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:13.348 passed 00:14:13.348 Test: blockdev nvme passthru rw ...passed 00:14:13.348 Test: blockdev nvme passthru vendor specific ...[2024-04-19 04:05:27.702231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:13.348 [2024-04-19 04:05:27.702240] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:13.348 [2024-04-19 04:05:27.702274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:13.348 [2024-04-19 04:05:27.702281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:13.348 [2024-04-19 04:05:27.702317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:13.348 [2024-04-19 04:05:27.702324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:13.348 [2024-04-19 04:05:27.702363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:13.348 [2024-04-19 04:05:27.702370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:13.348 passed 00:14:13.348 Test: blockdev nvme admin passthru ...passed 00:14:13.348 Test: blockdev copy ...passed 00:14:13.348 00:14:13.348 Run Summary: Type Total Ran Passed Failed Inactive 00:14:13.348 suites 1 1 n/a 0 0 00:14:13.348 tests 23 23 23 0 0 00:14:13.348 asserts 152 152 152 0 n/a 00:14:13.348 00:14:13.348 Elapsed time = 0.171 seconds 00:14:13.605 04:05:27 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:13.606 04:05:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:13.606 04:05:27 -- common/autotest_common.sh@10 -- # set +x 00:14:13.606 04:05:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:13.606 04:05:27 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:13.606 04:05:27 -- target/bdevio.sh@30 -- # nvmftestfini 00:14:13.606 04:05:27 -- nvmf/common.sh@477 -- # 
nvmfcleanup 00:14:13.606 04:05:27 -- nvmf/common.sh@117 -- # sync 00:14:13.606 04:05:27 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:13.606 04:05:27 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:13.606 04:05:27 -- nvmf/common.sh@120 -- # set +e 00:14:13.606 04:05:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:13.606 04:05:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:13.606 rmmod nvme_rdma 00:14:13.606 rmmod nvme_fabrics 00:14:13.606 04:05:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:13.606 04:05:27 -- nvmf/common.sh@124 -- # set -e 00:14:13.606 04:05:27 -- nvmf/common.sh@125 -- # return 0 00:14:13.606 04:05:27 -- nvmf/common.sh@478 -- # '[' -n 286903 ']' 00:14:13.606 04:05:27 -- nvmf/common.sh@479 -- # killprocess 286903 00:14:13.606 04:05:27 -- common/autotest_common.sh@936 -- # '[' -z 286903 ']' 00:14:13.606 04:05:27 -- common/autotest_common.sh@940 -- # kill -0 286903 00:14:13.606 04:05:27 -- common/autotest_common.sh@941 -- # uname 00:14:13.606 04:05:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:13.606 04:05:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 286903 00:14:13.606 04:05:28 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:14:13.606 04:05:28 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:14:13.606 04:05:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 286903' 00:14:13.606 killing process with pid 286903 00:14:13.606 04:05:28 -- common/autotest_common.sh@955 -- # kill 286903 00:14:13.606 04:05:28 -- common/autotest_common.sh@960 -- # wait 286903 00:14:13.864 04:05:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:13.864 04:05:28 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:14:13.864 00:14:13.864 real 0m7.068s 00:14:13.864 user 0m9.855s 00:14:13.864 sys 0m4.216s 00:14:13.864 04:05:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:13.864 04:05:28 -- common/autotest_common.sh@10 -- 
# set +x 00:14:13.864 ************************************ 00:14:13.864 END TEST nvmf_bdevio 00:14:13.864 ************************************ 00:14:13.864 04:05:28 -- nvmf/nvmf.sh@58 -- # '[' rdma = tcp ']' 00:14:13.864 04:05:28 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:14:13.864 04:05:28 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:14:13.864 04:05:28 -- nvmf/nvmf.sh@71 -- # '[' rdma = tcp ']' 00:14:13.864 04:05:28 -- nvmf/nvmf.sh@77 -- # [[ rdma == \r\d\m\a ]] 00:14:13.864 04:05:28 -- nvmf/nvmf.sh@78 -- # run_test nvmf_device_removal test/nvmf/target/device_removal.sh --transport=rdma 00:14:13.864 04:05:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:13.864 04:05:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:13.864 04:05:28 -- common/autotest_common.sh@10 -- # set +x 00:14:14.124 ************************************ 00:14:14.124 START TEST nvmf_device_removal 00:14:14.124 ************************************ 00:14:14.124 04:05:28 -- common/autotest_common.sh@1111 -- # test/nvmf/target/device_removal.sh --transport=rdma 00:14:14.124 * Looking for test storage... 
00:14:14.124 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:14.124 04:05:28 -- target/device_removal.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:14:14.124 04:05:28 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:14:14.124 04:05:28 -- common/autotest_common.sh@34 -- # set -e 00:14:14.124 04:05:28 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:14:14.124 04:05:28 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:14:14.124 04:05:28 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:14:14.124 04:05:28 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:14:14.124 04:05:28 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:14:14.124 04:05:28 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:14.124 04:05:28 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:14:14.124 04:05:28 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:14.124 04:05:28 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:14.124 04:05:28 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:14:14.124 04:05:28 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:14.124 04:05:28 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:14.124 04:05:28 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:14.124 04:05:28 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:14.124 04:05:28 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:14.124 04:05:28 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:14.124 04:05:28 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:14.124 04:05:28 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:14.124 04:05:28 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:14.124 04:05:28 -- 
common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:14.124 04:05:28 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:14.124 04:05:28 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:14:14.124 04:05:28 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:14.124 04:05:28 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:14:14.124 04:05:28 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:14:14.124 04:05:28 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:14:14.124 04:05:28 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:14:14.124 04:05:28 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:14.124 04:05:28 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:14:14.124 04:05:28 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:14:14.124 04:05:28 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:14:14.124 04:05:28 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:14:14.124 04:05:28 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:14:14.124 04:05:28 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:14:14.124 04:05:28 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:14:14.124 04:05:28 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:14:14.124 04:05:28 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:14:14.124 04:05:28 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:14:14.124 04:05:28 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:14:14.124 04:05:28 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:14:14.124 04:05:28 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:14:14.124 04:05:28 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:14:14.124 04:05:28 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:14:14.124 04:05:28 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:14:14.124 04:05:28 
-- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:14:14.124 04:05:28 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:14:14.124 04:05:28 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:14:14.124 04:05:28 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:14:14.124 04:05:28 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:14.124 04:05:28 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:14:14.124 04:05:28 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:14:14.124 04:05:28 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:14:14.124 04:05:28 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:14.124 04:05:28 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:14:14.124 04:05:28 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:14:14.124 04:05:28 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:14:14.124 04:05:28 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:14:14.124 04:05:28 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:14:14.124 04:05:28 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:14:14.124 04:05:28 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:14:14.124 04:05:28 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:14:14.124 04:05:28 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:14:14.124 04:05:28 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:14:14.124 04:05:28 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:14:14.124 04:05:28 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:14:14.124 04:05:28 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:14:14.124 04:05:28 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:14:14.124 04:05:28 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:14:14.124 04:05:28 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:14:14.124 04:05:28 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:14:14.124 04:05:28 -- common/build_config.sh@66 -- # 
CONFIG_HAVE_KEYUTILS=n 00:14:14.125 04:05:28 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:14:14.125 04:05:28 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:14:14.125 04:05:28 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:14:14.125 04:05:28 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:14:14.125 04:05:28 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:14:14.125 04:05:28 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:14:14.125 04:05:28 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:14:14.125 04:05:28 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:14:14.125 04:05:28 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:14:14.125 04:05:28 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:14:14.125 04:05:28 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:14:14.125 04:05:28 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:14:14.125 04:05:28 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:14:14.125 04:05:28 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:14:14.125 04:05:28 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:14:14.125 04:05:28 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:14:14.125 04:05:28 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:14:14.125 04:05:28 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:14:14.125 04:05:28 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:14:14.125 04:05:28 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:14:14.125 04:05:28 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:14:14.125 04:05:28 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:14:14.125 04:05:28 -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:14:14.125 04:05:28 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:14:14.125 04:05:28 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:14:14.125 04:05:28 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:14:14.125 04:05:28 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:14:14.125 04:05:28 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:14:14.125 04:05:28 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:14:14.125 04:05:28 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:14:14.125 04:05:28 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:14:14.125 04:05:28 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:14:14.125 #define SPDK_CONFIG_H 00:14:14.125 #define SPDK_CONFIG_APPS 1 00:14:14.125 #define SPDK_CONFIG_ARCH native 00:14:14.125 #undef SPDK_CONFIG_ASAN 00:14:14.125 #undef SPDK_CONFIG_AVAHI 00:14:14.125 #undef SPDK_CONFIG_CET 00:14:14.125 #define SPDK_CONFIG_COVERAGE 1 00:14:14.125 #define SPDK_CONFIG_CROSS_PREFIX 00:14:14.125 #undef SPDK_CONFIG_CRYPTO 00:14:14.125 #undef SPDK_CONFIG_CRYPTO_MLX5 00:14:14.125 #undef SPDK_CONFIG_CUSTOMOCF 00:14:14.125 #undef SPDK_CONFIG_DAOS 00:14:14.125 #define SPDK_CONFIG_DAOS_DIR 00:14:14.125 #define SPDK_CONFIG_DEBUG 1 00:14:14.125 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:14:14.125 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:14:14.125 #define SPDK_CONFIG_DPDK_INC_DIR 00:14:14.125 #define SPDK_CONFIG_DPDK_LIB_DIR 00:14:14.125 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:14:14.125 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:14:14.125 #define SPDK_CONFIG_EXAMPLES 1 
00:14:14.125 #undef SPDK_CONFIG_FC 00:14:14.125 #define SPDK_CONFIG_FC_PATH 00:14:14.125 #define SPDK_CONFIG_FIO_PLUGIN 1 00:14:14.125 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:14:14.125 #undef SPDK_CONFIG_FUSE 00:14:14.125 #undef SPDK_CONFIG_FUZZER 00:14:14.125 #define SPDK_CONFIG_FUZZER_LIB 00:14:14.125 #undef SPDK_CONFIG_GOLANG 00:14:14.125 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:14:14.125 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:14:14.125 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:14:14.125 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:14:14.125 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:14:14.125 #undef SPDK_CONFIG_HAVE_LIBBSD 00:14:14.125 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:14:14.125 #define SPDK_CONFIG_IDXD 1 00:14:14.125 #undef SPDK_CONFIG_IDXD_KERNEL 00:14:14.125 #undef SPDK_CONFIG_IPSEC_MB 00:14:14.125 #define SPDK_CONFIG_IPSEC_MB_DIR 00:14:14.125 #define SPDK_CONFIG_ISAL 1 00:14:14.125 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:14:14.125 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:14:14.125 #define SPDK_CONFIG_LIBDIR 00:14:14.125 #undef SPDK_CONFIG_LTO 00:14:14.125 #define SPDK_CONFIG_MAX_LCORES 00:14:14.125 #define SPDK_CONFIG_NVME_CUSE 1 00:14:14.125 #undef SPDK_CONFIG_OCF 00:14:14.125 #define SPDK_CONFIG_OCF_PATH 00:14:14.125 #define SPDK_CONFIG_OPENSSL_PATH 00:14:14.125 #undef SPDK_CONFIG_PGO_CAPTURE 00:14:14.125 #define SPDK_CONFIG_PGO_DIR 00:14:14.125 #undef SPDK_CONFIG_PGO_USE 00:14:14.125 #define SPDK_CONFIG_PREFIX /usr/local 00:14:14.125 #undef SPDK_CONFIG_RAID5F 00:14:14.125 #undef SPDK_CONFIG_RBD 00:14:14.125 #define SPDK_CONFIG_RDMA 1 00:14:14.125 #define SPDK_CONFIG_RDMA_PROV verbs 00:14:14.125 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:14:14.125 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:14:14.125 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:14:14.125 #define SPDK_CONFIG_SHARED 1 00:14:14.125 #undef SPDK_CONFIG_SMA 00:14:14.125 #define SPDK_CONFIG_TESTS 1 00:14:14.125 #undef SPDK_CONFIG_TSAN 00:14:14.125 #define SPDK_CONFIG_UBLK 1 00:14:14.125 
#define SPDK_CONFIG_UBSAN 1 00:14:14.125 #undef SPDK_CONFIG_UNIT_TESTS 00:14:14.125 #undef SPDK_CONFIG_URING 00:14:14.125 #define SPDK_CONFIG_URING_PATH 00:14:14.125 #undef SPDK_CONFIG_URING_ZNS 00:14:14.125 #undef SPDK_CONFIG_USDT 00:14:14.125 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:14:14.125 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:14:14.125 #undef SPDK_CONFIG_VFIO_USER 00:14:14.125 #define SPDK_CONFIG_VFIO_USER_DIR 00:14:14.125 #define SPDK_CONFIG_VHOST 1 00:14:14.125 #define SPDK_CONFIG_VIRTIO 1 00:14:14.125 #undef SPDK_CONFIG_VTUNE 00:14:14.125 #define SPDK_CONFIG_VTUNE_DIR 00:14:14.125 #define SPDK_CONFIG_WERROR 1 00:14:14.125 #define SPDK_CONFIG_WPDK_DIR 00:14:14.125 #undef SPDK_CONFIG_XNVME 00:14:14.125 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:14:14.125 04:05:28 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:14:14.125 04:05:28 -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:14.125 04:05:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.125 04:05:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.125 04:05:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.125 04:05:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.125 04:05:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.125 04:05:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.125 04:05:28 -- paths/export.sh@5 -- # export PATH 00:14:14.125 04:05:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.125 04:05:28 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:14:14.125 04:05:28 -- pm/common@6 -- # dirname 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:14:14.125 04:05:28 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:14:14.125 04:05:28 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:14:14.125 04:05:28 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:14:14.125 04:05:28 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:14:14.125 04:05:28 -- pm/common@67 -- # TEST_TAG=N/A 00:14:14.125 04:05:28 -- pm/common@68 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:14:14.125 04:05:28 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:14:14.125 04:05:28 -- pm/common@71 -- # uname -s 00:14:14.125 04:05:28 -- pm/common@71 -- # PM_OS=Linux 00:14:14.125 04:05:28 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:14:14.125 04:05:28 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:14:14.125 04:05:28 -- pm/common@76 -- # [[ Linux == Linux ]] 00:14:14.125 04:05:28 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:14:14.125 04:05:28 -- pm/common@76 -- # [[ ! 
-e /.dockerenv ]] 00:14:14.125 04:05:28 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:14:14.125 04:05:28 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:14:14.125 04:05:28 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:14:14.125 04:05:28 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:14:14.125 04:05:28 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:14:14.125 04:05:28 -- common/autotest_common.sh@57 -- # : 0 00:14:14.125 04:05:28 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:14:14.125 04:05:28 -- common/autotest_common.sh@61 -- # : 0 00:14:14.125 04:05:28 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:14:14.126 04:05:28 -- common/autotest_common.sh@63 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:14:14.126 04:05:28 -- common/autotest_common.sh@65 -- # : 1 00:14:14.126 04:05:28 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:14:14.126 04:05:28 -- common/autotest_common.sh@67 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:14:14.126 04:05:28 -- common/autotest_common.sh@69 -- # : 00:14:14.126 04:05:28 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:14:14.126 04:05:28 -- common/autotest_common.sh@71 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:14:14.126 04:05:28 -- common/autotest_common.sh@73 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:14:14.126 04:05:28 -- common/autotest_common.sh@75 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:14:14.126 04:05:28 -- common/autotest_common.sh@77 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:14:14.126 04:05:28 -- common/autotest_common.sh@79 -- # : 0 
00:14:14.126 04:05:28 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:14:14.126 04:05:28 -- common/autotest_common.sh@81 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:14:14.126 04:05:28 -- common/autotest_common.sh@83 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:14:14.126 04:05:28 -- common/autotest_common.sh@85 -- # : 1 00:14:14.126 04:05:28 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:14:14.126 04:05:28 -- common/autotest_common.sh@87 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:14:14.126 04:05:28 -- common/autotest_common.sh@89 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:14:14.126 04:05:28 -- common/autotest_common.sh@91 -- # : 1 00:14:14.126 04:05:28 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:14:14.126 04:05:28 -- common/autotest_common.sh@93 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:14:14.126 04:05:28 -- common/autotest_common.sh@95 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:14:14.126 04:05:28 -- common/autotest_common.sh@97 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:14:14.126 04:05:28 -- common/autotest_common.sh@99 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:14:14.126 04:05:28 -- common/autotest_common.sh@101 -- # : rdma 00:14:14.126 04:05:28 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:14:14.126 04:05:28 -- common/autotest_common.sh@103 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:14:14.126 04:05:28 -- common/autotest_common.sh@105 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@106 -- # export 
SPDK_TEST_VHOST 00:14:14.126 04:05:28 -- common/autotest_common.sh@107 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:14:14.126 04:05:28 -- common/autotest_common.sh@109 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:14:14.126 04:05:28 -- common/autotest_common.sh@111 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:14:14.126 04:05:28 -- common/autotest_common.sh@113 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:14:14.126 04:05:28 -- common/autotest_common.sh@115 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:14:14.126 04:05:28 -- common/autotest_common.sh@117 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:14:14.126 04:05:28 -- common/autotest_common.sh@119 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:14:14.126 04:05:28 -- common/autotest_common.sh@121 -- # : 1 00:14:14.126 04:05:28 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:14:14.126 04:05:28 -- common/autotest_common.sh@123 -- # : 00:14:14.126 04:05:28 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:14:14.126 04:05:28 -- common/autotest_common.sh@125 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:14:14.126 04:05:28 -- common/autotest_common.sh@127 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:14:14.126 04:05:28 -- common/autotest_common.sh@129 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:14:14.126 04:05:28 -- common/autotest_common.sh@131 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:14:14.126 04:05:28 -- common/autotest_common.sh@133 
-- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:14:14.126 04:05:28 -- common/autotest_common.sh@135 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:14:14.126 04:05:28 -- common/autotest_common.sh@137 -- # : 00:14:14.126 04:05:28 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:14:14.126 04:05:28 -- common/autotest_common.sh@139 -- # : true 00:14:14.126 04:05:28 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:14:14.126 04:05:28 -- common/autotest_common.sh@141 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:14:14.126 04:05:28 -- common/autotest_common.sh@143 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:14:14.126 04:05:28 -- common/autotest_common.sh@145 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:14:14.126 04:05:28 -- common/autotest_common.sh@147 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:14:14.126 04:05:28 -- common/autotest_common.sh@149 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:14:14.126 04:05:28 -- common/autotest_common.sh@151 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:14:14.126 04:05:28 -- common/autotest_common.sh@153 -- # : mlx5 00:14:14.126 04:05:28 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:14:14.126 04:05:28 -- common/autotest_common.sh@155 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:14:14.126 04:05:28 -- common/autotest_common.sh@157 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:14:14.126 04:05:28 -- common/autotest_common.sh@159 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@160 -- 
# export SPDK_TEST_XNVME 00:14:14.126 04:05:28 -- common/autotest_common.sh@161 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:14:14.126 04:05:28 -- common/autotest_common.sh@163 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:14:14.126 04:05:28 -- common/autotest_common.sh@166 -- # : 00:14:14.126 04:05:28 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:14:14.126 04:05:28 -- common/autotest_common.sh@168 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:14:14.126 04:05:28 -- common/autotest_common.sh@170 -- # : 0 00:14:14.126 04:05:28 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:14:14.126 04:05:28 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:14:14.126 04:05:28 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:14:14.126 04:05:28 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:14:14.126 04:05:28 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:14:14.126 04:05:28 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:14.126 04:05:28 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:14.126 04:05:28 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:14.126 04:05:28 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:14.126 04:05:28 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:14:14.126 04:05:28 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:14:14.126 04:05:28 -- common/autotest_common.sh@184 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:14:14.126 04:05:28 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:14:14.126 04:05:28 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:14:14.126 04:05:28 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:14:14.126 04:05:28 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:14.126 04:05:28 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:14.126 04:05:28 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:14.127 04:05:28 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:14.127 04:05:28 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:14:14.127 04:05:28 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:14:14.127 04:05:28 -- common/autotest_common.sh@199 -- # cat 00:14:14.127 04:05:28 
-- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:14:14.127 04:05:28 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:14.127 04:05:28 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:14.127 04:05:28 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:14.127 04:05:28 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:14.127 04:05:28 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:14:14.127 04:05:28 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:14:14.127 04:05:28 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:14:14.127 04:05:28 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:14:14.127 04:05:28 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:14:14.127 04:05:28 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:14:14.127 04:05:28 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:14.127 04:05:28 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:14.127 04:05:28 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:14.127 04:05:28 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:14.127 04:05:28 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:14.127 04:05:28 -- common/autotest_common.sh@245 -- # 
AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:14.127 04:05:28 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:14.127 04:05:28 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:14.127 04:05:28 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:14:14.127 04:05:28 -- common/autotest_common.sh@252 -- # export valgrind= 00:14:14.127 04:05:28 -- common/autotest_common.sh@252 -- # valgrind= 00:14:14.127 04:05:28 -- common/autotest_common.sh@258 -- # uname -s 00:14:14.127 04:05:28 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:14:14.127 04:05:28 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:14:14.127 04:05:28 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:14:14.127 04:05:28 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:14:14.127 04:05:28 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:14:14.127 04:05:28 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:14:14.127 04:05:28 -- common/autotest_common.sh@268 -- # MAKE=make 00:14:14.127 04:05:28 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j112 00:14:14.127 04:05:28 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:14:14.127 04:05:28 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:14:14.127 04:05:28 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:14:14.127 04:05:28 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:14:14.127 04:05:28 -- common/autotest_common.sh@289 -- # for i in "$@" 00:14:14.127 04:05:28 -- common/autotest_common.sh@290 -- # case "$i" in 00:14:14.127 04:05:28 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=rdma 00:14:14.127 04:05:28 -- common/autotest_common.sh@307 -- # [[ -z 287442 ]] 00:14:14.127 04:05:28 -- common/autotest_common.sh@307 -- # kill -0 287442 00:14:14.127 04:05:28 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:14:14.127 04:05:28 -- common/autotest_common.sh@317 -- # [[ -v testdir 
]] 00:14:14.127 04:05:28 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:14:14.127 04:05:28 -- common/autotest_common.sh@320 -- # local mount target_dir 00:14:14.127 04:05:28 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:14:14.127 04:05:28 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:14:14.127 04:05:28 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:14:14.127 04:05:28 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:14:14.127 04:05:28 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.Dr2Agr 00:14:14.127 04:05:28 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:14:14.127 04:05:28 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:14:14.127 04:05:28 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:14:14.127 04:05:28 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.Dr2Agr/tests/target /tmp/spdk.Dr2Agr 00:14:14.127 04:05:28 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:14:14.127 04:05:28 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:14:14.386 04:05:28 -- common/autotest_common.sh@316 -- # df -T 00:14:14.386 04:05:28 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:14:14.386 04:05:28 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:14:14.386 04:05:28 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:14:14.386 04:05:28 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:14:14.386 04:05:28 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:14:14.386 04:05:28 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:14:14.386 04:05:28 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:14:14.386 
04:05:28 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:14:14.386 04:05:28 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:14:14.386 04:05:28 -- common/autotest_common.sh@351 -- # avails["$mount"]=995516416 00:14:14.386 04:05:28 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:14:14.386 04:05:28 -- common/autotest_common.sh@352 -- # uses["$mount"]=4288913408 00:14:14.386 04:05:28 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:14:14.386 04:05:28 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root 00:14:14.386 04:05:28 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:14:14.386 04:05:28 -- common/autotest_common.sh@351 -- # avails["$mount"]=90522828800 00:14:14.386 04:05:28 -- common/autotest_common.sh@351 -- # sizes["$mount"]=95554768896 00:14:14.386 04:05:28 -- common/autotest_common.sh@352 -- # uses["$mount"]=5031940096 00:14:14.386 04:05:28 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:14:14.386 04:05:28 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:14:14.386 04:05:28 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:14:14.386 04:05:28 -- common/autotest_common.sh@351 -- # avails["$mount"]=47764086784 00:14:14.386 04:05:28 -- common/autotest_common.sh@351 -- # sizes["$mount"]=47777382400 00:14:14.386 04:05:28 -- common/autotest_common.sh@352 -- # uses["$mount"]=13295616 00:14:14.386 04:05:28 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:14:14.386 04:05:28 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:14:14.386 04:05:28 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:14:14.386 04:05:28 -- common/autotest_common.sh@351 -- # avails["$mount"]=19087880192 00:14:14.386 04:05:28 -- common/autotest_common.sh@351 -- # sizes["$mount"]=19110957056 00:14:14.386 04:05:28 -- common/autotest_common.sh@352 -- # uses["$mount"]=23076864 
00:14:14.386 04:05:28 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:14:14.386 04:05:28 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:14:14.386 04:05:28 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:14:14.386 04:05:28 -- common/autotest_common.sh@351 -- # avails["$mount"]=47777214464 00:14:14.386 04:05:28 -- common/autotest_common.sh@351 -- # sizes["$mount"]=47777386496 00:14:14.386 04:05:28 -- common/autotest_common.sh@352 -- # uses["$mount"]=172032 00:14:14.386 04:05:28 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:14:14.386 04:05:28 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:14:14.386 04:05:28 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:14:14.386 04:05:28 -- common/autotest_common.sh@351 -- # avails["$mount"]=9555472384 00:14:14.386 04:05:28 -- common/autotest_common.sh@351 -- # sizes["$mount"]=9555476480 00:14:14.386 04:05:28 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:14:14.386 04:05:28 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:14:14.386 04:05:28 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:14:14.386 * Looking for test storage... 
00:14:14.386 04:05:28 -- common/autotest_common.sh@357 -- # local target_space new_size 00:14:14.387 04:05:28 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:14:14.387 04:05:28 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:14.387 04:05:28 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:14:14.387 04:05:28 -- common/autotest_common.sh@361 -- # mount=/ 00:14:14.387 04:05:28 -- common/autotest_common.sh@363 -- # target_space=90522828800 00:14:14.387 04:05:28 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:14:14.387 04:05:28 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:14:14.387 04:05:28 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:14:14.387 04:05:28 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:14:14.387 04:05:28 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:14:14.387 04:05:28 -- common/autotest_common.sh@370 -- # new_size=7246532608 00:14:14.387 04:05:28 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:14:14.387 04:05:28 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:14.387 04:05:28 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:14.387 04:05:28 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:14.387 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:14.387 04:05:28 -- common/autotest_common.sh@378 -- # return 0 00:14:14.387 04:05:28 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:14:14.387 04:05:28 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:14:14.387 04:05:28 -- 
common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:14:14.387 04:05:28 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:14.387 04:05:28 -- common/autotest_common.sh@1673 -- # true 00:14:14.387 04:05:28 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:14:14.387 04:05:28 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:14:14.387 04:05:28 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:14:14.387 04:05:28 -- common/autotest_common.sh@27 -- # exec 00:14:14.387 04:05:28 -- common/autotest_common.sh@29 -- # exec 00:14:14.387 04:05:28 -- common/autotest_common.sh@31 -- # xtrace_restore 00:14:14.387 04:05:28 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:14:14.387 04:05:28 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:14.387 04:05:28 -- common/autotest_common.sh@18 -- # set -x 00:14:14.387 04:05:28 -- target/device_removal.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:14.387 04:05:28 -- nvmf/common.sh@7 -- # uname -s 00:14:14.387 04:05:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:14.387 04:05:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:14.387 04:05:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:14.387 04:05:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:14.387 04:05:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:14.387 04:05:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:14.387 04:05:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:14.387 04:05:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:14.387 04:05:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:14.387 04:05:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:14.387 04:05:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:14.387 04:05:28 -- 
nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:14:14.387 04:05:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:14.387 04:05:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:14.387 04:05:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:14.387 04:05:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:14.387 04:05:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:14.387 04:05:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.387 04:05:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.387 04:05:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.387 04:05:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.387 04:05:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.387 04:05:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.387 04:05:28 -- paths/export.sh@5 -- # export PATH 00:14:14.387 04:05:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.387 04:05:28 -- nvmf/common.sh@47 
-- # : 0 00:14:14.387 04:05:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:14.387 04:05:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:14.387 04:05:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:14.387 04:05:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:14.387 04:05:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:14.387 04:05:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:14.387 04:05:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:14.387 04:05:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:14.387 04:05:28 -- target/device_removal.sh@13 -- # tgt_core_mask=0x3 00:14:14.387 04:05:28 -- target/device_removal.sh@14 -- # bdevperf_core_mask=0x4 00:14:14.387 04:05:28 -- target/device_removal.sh@15 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:14.387 04:05:28 -- target/device_removal.sh@16 -- # bdevperf_rpc_pid=-1 00:14:14.387 04:05:28 -- target/device_removal.sh@18 -- # nvmftestinit 00:14:14.387 04:05:28 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:14:14.387 04:05:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:14.387 04:05:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:14.387 04:05:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:14.387 04:05:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:14.387 04:05:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.387 04:05:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:14.388 04:05:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.388 04:05:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:14.388 04:05:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:14.388 04:05:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:14.388 04:05:28 -- common/autotest_common.sh@10 -- # set +x 00:14:19.654 04:05:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:19.654 04:05:34 -- nvmf/common.sh@291 -- # 
pci_devs=() 00:14:19.654 04:05:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:19.654 04:05:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:19.654 04:05:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:19.654 04:05:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:19.654 04:05:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:19.654 04:05:34 -- nvmf/common.sh@295 -- # net_devs=() 00:14:19.654 04:05:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:19.654 04:05:34 -- nvmf/common.sh@296 -- # e810=() 00:14:19.654 04:05:34 -- nvmf/common.sh@296 -- # local -ga e810 00:14:19.654 04:05:34 -- nvmf/common.sh@297 -- # x722=() 00:14:19.654 04:05:34 -- nvmf/common.sh@297 -- # local -ga x722 00:14:19.654 04:05:34 -- nvmf/common.sh@298 -- # mlx=() 00:14:19.654 04:05:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:19.654 04:05:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:19.654 04:05:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:19.654 04:05:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:19.654 04:05:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:19.654 04:05:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:19.654 04:05:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:19.654 04:05:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:19.654 04:05:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:19.654 04:05:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:19.655 04:05:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:19.655 04:05:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:19.655 04:05:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:19.655 04:05:34 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 
00:14:19.655 04:05:34 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:19.655 04:05:34 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:19.655 04:05:34 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:19.655 04:05:34 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:19.655 04:05:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:19.655 04:05:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:19.655 04:05:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:19.655 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:19.655 04:05:34 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:19.655 04:05:34 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:19.655 04:05:34 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:19.655 04:05:34 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:19.655 04:05:34 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:19.655 04:05:34 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:19.655 04:05:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:19.655 04:05:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:19.655 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:19.655 04:05:34 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:19.655 04:05:34 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:19.655 04:05:34 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:19.655 04:05:34 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:19.655 04:05:34 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:19.655 04:05:34 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:19.655 04:05:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:19.655 04:05:34 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:19.655 04:05:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:19.655 04:05:34 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.655 04:05:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:19.655 04:05:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.655 04:05:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:19.655 Found net devices under 0000:18:00.0: mlx_0_0 00:14:19.655 04:05:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.655 04:05:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:19.655 04:05:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.655 04:05:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:19.655 04:05:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.655 04:05:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:19.655 Found net devices under 0000:18:00.1: mlx_0_1 00:14:19.655 04:05:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.655 04:05:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:19.655 04:05:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:19.655 04:05:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:19.655 04:05:34 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:14:19.655 04:05:34 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:14:19.655 04:05:34 -- nvmf/common.sh@409 -- # rdma_device_init 00:14:19.655 04:05:34 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:14:19.655 04:05:34 -- nvmf/common.sh@58 -- # uname 00:14:19.655 04:05:34 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:19.655 04:05:34 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:19.655 04:05:34 -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:19.655 04:05:34 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:19.655 04:05:34 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:19.655 04:05:34 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:19.655 04:05:34 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:19.655 
04:05:34 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:19.655 04:05:34 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:14:19.655 04:05:34 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:19.655 04:05:34 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:19.655 04:05:34 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:19.655 04:05:34 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:19.655 04:05:34 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:19.655 04:05:34 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:19.655 04:05:34 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:19.655 04:05:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:19.655 04:05:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:19.655 04:05:34 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:19.655 04:05:34 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:19.655 04:05:34 -- nvmf/common.sh@105 -- # continue 2 00:14:19.655 04:05:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:19.655 04:05:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:19.655 04:05:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:19.655 04:05:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:19.655 04:05:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:19.655 04:05:34 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:19.655 04:05:34 -- nvmf/common.sh@105 -- # continue 2 00:14:19.655 04:05:34 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:19.655 04:05:34 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:19.655 04:05:34 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:19.655 04:05:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:19.655 04:05:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:19.655 04:05:34 -- nvmf/common.sh@113 
-- # cut -d/ -f1 00:14:19.655 04:05:34 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:19.655 04:05:34 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:19.655 04:05:34 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:19.655 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:19.655 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:14:19.655 altname enp24s0f0np0 00:14:19.655 altname ens785f0np0 00:14:19.655 inet 192.168.100.8/24 scope global mlx_0_0 00:14:19.655 valid_lft forever preferred_lft forever 00:14:19.655 04:05:34 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:19.655 04:05:34 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:19.655 04:05:34 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:19.655 04:05:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:19.655 04:05:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:19.655 04:05:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:19.655 04:05:34 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:19.655 04:05:34 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:19.655 04:05:34 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:19.655 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:19.655 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:14:19.655 altname enp24s0f1np1 00:14:19.655 altname ens785f1np1 00:14:19.655 inet 192.168.100.9/24 scope global mlx_0_1 00:14:19.655 valid_lft forever preferred_lft forever 00:14:19.655 04:05:34 -- nvmf/common.sh@411 -- # return 0 00:14:19.655 04:05:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:19.655 04:05:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:19.655 04:05:34 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:14:19.655 04:05:34 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:14:19.655 04:05:34 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:19.655 04:05:34 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 
00:14:19.655 04:05:34 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:19.655 04:05:34 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:19.655 04:05:34 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:19.914 04:05:34 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:19.914 04:05:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:19.914 04:05:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:19.914 04:05:34 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:19.914 04:05:34 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:19.914 04:05:34 -- nvmf/common.sh@105 -- # continue 2 00:14:19.914 04:05:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:19.914 04:05:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:19.914 04:05:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:19.914 04:05:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:19.914 04:05:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:19.914 04:05:34 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:19.914 04:05:34 -- nvmf/common.sh@105 -- # continue 2 00:14:19.914 04:05:34 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:19.914 04:05:34 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:19.914 04:05:34 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:19.914 04:05:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:19.914 04:05:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:19.914 04:05:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:19.914 04:05:34 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:19.914 04:05:34 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:19.914 04:05:34 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:19.914 04:05:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:19.914 04:05:34 -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:14:19.914 04:05:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:19.914 04:05:34 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:14:19.914 192.168.100.9' 00:14:19.914 04:05:34 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:19.914 192.168.100.9' 00:14:19.914 04:05:34 -- nvmf/common.sh@446 -- # head -n 1 00:14:19.914 04:05:34 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:19.914 04:05:34 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:14:19.914 192.168.100.9' 00:14:19.914 04:05:34 -- nvmf/common.sh@447 -- # tail -n +2 00:14:19.914 04:05:34 -- nvmf/common.sh@447 -- # head -n 1 00:14:19.914 04:05:34 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:19.914 04:05:34 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:14:19.914 04:05:34 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:19.914 04:05:34 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:14:19.914 04:05:34 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:14:19.914 04:05:34 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:14:19.914 04:05:34 -- target/device_removal.sh@235 -- # BOND_NAME=bond_nvmf 00:14:19.914 04:05:34 -- target/device_removal.sh@236 -- # BOND_IP=10.11.11.26 00:14:19.914 04:05:34 -- target/device_removal.sh@237 -- # BOND_MASK=24 00:14:19.914 04:05:34 -- target/device_removal.sh@311 -- # run_test nvmf_device_removal_pci_remove_no_srq test_remove_and_rescan --no-srq 00:14:19.914 04:05:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:19.914 04:05:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:19.914 04:05:34 -- common/autotest_common.sh@10 -- # set +x 00:14:19.914 ************************************ 00:14:19.914 START TEST nvmf_device_removal_pci_remove_no_srq 00:14:19.914 ************************************ 00:14:19.914 04:05:34 -- common/autotest_common.sh@1111 -- # test_remove_and_rescan --no-srq 00:14:19.914 04:05:34 
-- target/device_removal.sh@128 -- # nvmfappstart -m 0x3 00:14:19.914 04:05:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:19.914 04:05:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:19.914 04:05:34 -- common/autotest_common.sh@10 -- # set +x 00:14:19.914 04:05:34 -- nvmf/common.sh@470 -- # nvmfpid=290652 00:14:19.914 04:05:34 -- nvmf/common.sh@471 -- # waitforlisten 290652 00:14:19.914 04:05:34 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:19.915 04:05:34 -- common/autotest_common.sh@817 -- # '[' -z 290652 ']' 00:14:19.915 04:05:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.915 04:05:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:19.915 04:05:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.915 04:05:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:19.915 04:05:34 -- common/autotest_common.sh@10 -- # set +x 00:14:20.173 [2024-04-19 04:05:34.458681] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:14:20.173 [2024-04-19 04:05:34.458724] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.173 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.173 [2024-04-19 04:05:34.512487] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:20.173 [2024-04-19 04:05:34.584076] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:20.173 [2024-04-19 04:05:34.584117] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:20.173 [2024-04-19 04:05:34.584123] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:20.173 [2024-04-19 04:05:34.584128] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:20.173 [2024-04-19 04:05:34.584132] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:20.173 [2024-04-19 04:05:34.584191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.173 [2024-04-19 04:05:34.584193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.738 04:05:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:20.738 04:05:35 -- common/autotest_common.sh@850 -- # return 0 00:14:20.738 04:05:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:20.738 04:05:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:20.738 04:05:35 -- common/autotest_common.sh@10 -- # set +x 00:14:20.738 04:05:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.738 04:05:35 -- target/device_removal.sh@130 -- # create_subsystem_and_connect --no-srq 00:14:20.738 04:05:35 -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:14:20.738 04:05:35 -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:14:20.738 04:05:35 -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 --no-srq 00:14:20.738 04:05:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:20.738 04:05:35 -- common/autotest_common.sh@10 -- # set +x 00:14:20.997 [2024-04-19 04:05:35.284347] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdfb060/0xdff550) succeed. 
00:14:20.997 [2024-04-19 04:05:35.292244] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdfc560/0xe40be0) succeed. 00:14:20.997 04:05:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:20.997 04:05:35 -- target/device_removal.sh@49 -- # get_rdma_if_list 00:14:20.997 04:05:35 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:20.997 04:05:35 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:20.997 04:05:35 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:20.997 04:05:35 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:20.997 04:05:35 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:20.997 04:05:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:20.997 04:05:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:20.997 04:05:35 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:20.997 04:05:35 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:20.997 04:05:35 -- nvmf/common.sh@105 -- # continue 2 00:14:20.997 04:05:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:20.997 04:05:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:20.997 04:05:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:20.997 04:05:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:20.997 04:05:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:20.997 04:05:35 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:20.997 04:05:35 -- nvmf/common.sh@105 -- # continue 2 00:14:20.997 04:05:35 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:14:20.997 04:05:35 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0 00:14:20.997 04:05:35 -- target/device_removal.sh@25 -- # local -a dev_name 00:14:20.997 04:05:35 -- target/device_removal.sh@27 -- # dev_name=mlx_0_0 00:14:20.997 04:05:35 -- 
target/device_removal.sh@28 -- # malloc_name=mlx_0_0 00:14:20.997 04:05:35 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0 00:14:20.997 04:05:35 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:14:20.997 04:05:35 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:14:20.997 04:05:35 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0 00:14:20.997 04:05:35 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:20.997 04:05:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:20.997 04:05:35 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:20.997 04:05:35 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:20.997 04:05:35 -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:14:20.997 04:05:35 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0 00:14:20.997 04:05:35 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:14:20.997 04:05:35 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:14:20.997 04:05:35 -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0 00:14:20.997 04:05:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:20.997 04:05:35 -- common/autotest_common.sh@10 -- # set +x 00:14:20.997 04:05:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:20.997 04:05:35 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0 00:14:20.997 04:05:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:20.997 04:05:35 -- common/autotest_common.sh@10 -- # set +x 00:14:20.997 04:05:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:20.997 04:05:35 -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0 00:14:20.997 04:05:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:20.997 04:05:35 -- common/autotest_common.sh@10 -- # set +x 00:14:20.997 04:05:35 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:14:20.997 04:05:35 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 00:14:20.997 04:05:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:20.997 04:05:35 -- common/autotest_common.sh@10 -- # set +x 00:14:20.997 [2024-04-19 04:05:35.412063] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:20.997 04:05:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:20.997 04:05:35 -- target/device_removal.sh@41 -- # return 0 00:14:20.997 04:05:35 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0 00:14:20.997 04:05:35 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:14:20.997 04:05:35 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1 00:14:20.997 04:05:35 -- target/device_removal.sh@25 -- # local -a dev_name 00:14:20.997 04:05:35 -- target/device_removal.sh@27 -- # dev_name=mlx_0_1 00:14:20.997 04:05:35 -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1 00:14:20.997 04:05:35 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1 00:14:20.997 04:05:35 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:14:20.997 04:05:35 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:14:20.997 04:05:35 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1 00:14:20.997 04:05:35 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:20.997 04:05:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:20.997 04:05:35 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:20.997 04:05:35 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:20.997 04:05:35 -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:14:20.997 04:05:35 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1 00:14:20.997 04:05:35 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 
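Each netdev in the loop above gets the same four RPCs. A condensed sketch of one pass, with the NQN/serial derivation and the `ip`/`awk`/`cut` address pipeline from the trace made explicit (the `./scripts/rpc.py` path is an assumption; the function names other than the RPC commands are ours):

```shell
# NQN derivation, as echoed by get_subsystem_nqn in the trace.
get_subsystem_nqn() { echo "nqn.2016-06.io.spdk:system_$1"; }

# The trace's get_ip_address pipeline: 4th field of `ip -o -4 addr show`,
# with the /prefix stripped.
get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }

# One pass of the per-netdev loop: malloc bdev, subsystem, namespace, listener.
setup_subsystem_for_netdev() {
    local dev=$1 nqn ip
    nqn=$(get_subsystem_nqn "$dev")
    ip=$(get_ip_address "$dev")
    ./scripts/rpc.py bdev_malloc_create 128 512 -b "$dev"
    ./scripts/rpc.py nvmf_create_subsystem "$nqn" -a -s "SPDK000$dev"
    ./scripts/rpc.py nvmf_subsystem_add_ns "$nqn" "$dev"
    ./scripts/rpc.py nvmf_subsystem_add_listener "$nqn" -t rdma -a "$ip" -s 4420
}
```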
00:14:20.997 04:05:35 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:14:20.997 04:05:35 -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_1 00:14:20.997 04:05:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:20.997 04:05:35 -- common/autotest_common.sh@10 -- # set +x 00:14:20.997 04:05:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:20.997 04:05:35 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1 00:14:20.997 04:05:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:20.997 04:05:35 -- common/autotest_common.sh@10 -- # set +x 00:14:20.997 04:05:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:20.997 04:05:35 -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1 00:14:20.997 04:05:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:20.997 04:05:35 -- common/autotest_common.sh@10 -- # set +x 00:14:20.997 04:05:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:20.997 04:05:35 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 00:14:20.997 04:05:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:20.997 04:05:35 -- common/autotest_common.sh@10 -- # set +x 00:14:20.997 [2024-04-19 04:05:35.490044] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:14:20.997 04:05:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:20.997 04:05:35 -- target/device_removal.sh@41 -- # return 0 00:14:20.997 04:05:35 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1 00:14:20.997 04:05:35 -- target/device_removal.sh@53 -- # return 0 00:14:20.997 04:05:35 -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1 00:14:20.997 04:05:35 -- 
target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1') 00:14:20.997 04:05:35 -- target/device_removal.sh@87 -- # local dev_names 00:14:20.997 04:05:35 -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:20.997 04:05:35 -- target/device_removal.sh@91 -- # bdevperf_pid=290788 00:14:20.997 04:05:35 -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:20.997 04:05:35 -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:14:20.997 04:05:35 -- target/device_removal.sh@94 -- # waitforlisten 290788 /var/tmp/bdevperf.sock 00:14:20.997 04:05:35 -- common/autotest_common.sh@817 -- # '[' -z 290788 ']' 00:14:20.997 04:05:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:20.997 04:05:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:20.997 04:05:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:20.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
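bdevperf is started with `-z` (wait for RPC) on its own socket, and the harness blocks until that socket appears. A simplified stand-in for `waitforlisten`, polling only for socket existence with the same `max_retries=100` bound (the real helper in autotest_common.sh additionally probes the socket with an RPC before returning):

```shell
# Simplified waitforlisten sketch: poll for the UNIX-domain socket, bounded
# by max_retries. Returns 1 if the socket never shows up.
waitforlisten_sock() {
    local sock=$1 max_retries=${2:-100} i=0
    while [ ! -S "$sock" ]; do
        i=$((i + 1))
        [ "$i" -ge "$max_retries" ] && return 1
        sleep 0.1
    done
    return 0
}
```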
00:14:20.997 04:05:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:20.997 04:05:35 -- common/autotest_common.sh@10 -- # set +x 00:14:21.930 04:05:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:21.930 04:05:36 -- common/autotest_common.sh@850 -- # return 0 00:14:21.930 04:05:36 -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:14:21.930 04:05:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:21.930 04:05:36 -- common/autotest_common.sh@10 -- # set +x 00:14:21.930 04:05:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:21.930 04:05:36 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:14:21.930 04:05:36 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0 00:14:21.930 04:05:36 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:14:21.930 04:05:36 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:14:21.930 04:05:36 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0 00:14:21.930 04:05:36 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:21.930 04:05:36 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:21.930 04:05:36 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:21.930 04:05:36 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:21.930 04:05:36 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:14:21.930 04:05:36 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1 00:14:21.930 04:05:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:21.930 04:05:36 -- common/autotest_common.sh@10 -- # set +x 00:14:21.930 Nvme_mlx_0_0n1 00:14:21.930 04:05:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:21.930 04:05:36 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:14:21.931 
04:05:36 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1 00:14:21.931 04:05:36 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:14:21.931 04:05:36 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:14:21.931 04:05:36 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1 00:14:21.931 04:05:36 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:21.931 04:05:36 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:21.931 04:05:36 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:21.931 04:05:36 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:21.931 04:05:36 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:14:21.931 04:05:36 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1 00:14:21.931 04:05:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:21.931 04:05:36 -- common/autotest_common.sh@10 -- # set +x 00:14:22.189 Nvme_mlx_0_1n1 00:14:22.189 04:05:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:22.189 04:05:36 -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=291056 00:14:22.189 04:05:36 -- target/device_removal.sh@112 -- # sleep 5 00:14:22.189 04:05:36 -- target/device_removal.sh@109 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:14:27.451 04:05:41 -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:14:27.451 04:05:41 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0 00:14:27.451 04:05:41 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0 00:14:27.451 04:05:41 -- target/device_removal.sh@71 -- # dev_name=mlx_0_0 00:14:27.451 04:05:41 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0 00:14:27.451 04:05:41 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 
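The removal phase that follows resolves each netdev's PCI directory through its sysfs `device` symlink and then writes `1` to the `remove` node. A sketch exercised against a throwaway mock tree, since the real paths live under `/sys/bus/pci/devices` (the sysfs-root parameter is our addition for testability):

```shell
# get_pci_dir, parameterized by a root so it can run against a mock tree; the
# trace resolves /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device.
get_pci_dir() { readlink -f "$1/net/$2/device"; }

# remove_one_nic: the write that asks the kernel to hot-remove the function,
# which is what triggers the mlx5_0 device-removal events in the log.
remove_one_nic() { echo 1 > "$(get_pci_dir "$1" "$2")/remove"; }

# Mock sysfs layout for illustration only:
root=$(mktemp -d)
mkdir -p "$root/0000:18:00.0/net/mlx_0_0"
ln -s ../.. "$root/0000:18:00.0/net/mlx_0_0/device"
remove_one_nic "$root/0000:18:00.0" mlx_0_0
cat "$root/0000:18:00.0/remove"
```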
00:14:27.451 04:05:41 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device 00:14:27.451 04:05:41 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/infiniband 00:14:27.451 04:05:41 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0 00:14:27.451 04:05:41 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0 00:14:27.451 04:05:41 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:27.451 04:05:41 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:27.451 04:05:41 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:27.451 04:05:41 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:27.451 04:05:41 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:14:27.451 04:05:41 -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0 00:14:27.451 04:05:41 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:14:27.451 04:05:41 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device 00:14:27.451 04:05:41 -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0 00:14:27.451 04:05:41 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:14:27.451 04:05:41 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:14:27.451 04:05:41 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:14:27.451 04:05:41 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:14:27.451 04:05:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:27.451 04:05:41 -- target/device_removal.sh@77 -- # grep mlx5_0 00:14:27.451 04:05:41 -- common/autotest_common.sh@10 -- # set +x 00:14:27.451 04:05:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:27.451 mlx5_0 00:14:27.451 04:05:41 -- target/device_removal.sh@78 -- # return 0 00:14:27.451 04:05:41 -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0 00:14:27.451 
04:05:41 -- target/device_removal.sh@66 -- # dev_name=mlx_0_0 00:14:27.451 04:05:41 -- target/device_removal.sh@67 -- # echo 1 00:14:27.451 04:05:41 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0 00:14:27.451 04:05:41 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:14:27.451 04:05:41 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device 00:14:27.451 [2024-04-19 04:05:41.654666] rdma.c:3610:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed. 00:14:27.451 [2024-04-19 04:05:41.654750] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:14:27.451 [2024-04-19 04:05:41.654846] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:14:27.451 [2024-04-19 04:05:41.654859] rdma.c: 916:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 62 00:14:27.451 [2024-04-19 04:05:41.654865] rdma.c: 703:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1) 00:14:27.451 [2024-04-19 04:05:41.654871] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:27.451 [2024-04-19 04:05:41.654876] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:27.451 [2024-04-19 04:05:41.654881] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:27.451 [2024-04-19 04:05:41.654889] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:27.451 [2024-04-19 04:05:41.654896] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:27.451 [2024-04-19 04:05:41.654901] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:27.451 [2024-04-19 04:05:41.654906] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:27.451 [2024-04-19 04:05:41.654910] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:27.451 
Request opcode: 1 00:14:27.452 [2024-04-19 04:05:41.655431] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:27.452 [2024-04-19 04:05:41.655435] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:27.452 [2024-04-19 04:05:41.655439] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:27.452 [2024-04-19 04:05:41.655444] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:27.452 [2024-04-19 04:05:41.655448] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:27.452 [2024-04-19 04:05:41.655452] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:34.016 04:05:47 -- target/device_removal.sh@147 -- # seq 1 10 00:14:34.016 04:05:47 -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:14:34.016 04:05:47 -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:14:34.016 04:05:47 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:14:34.016 04:05:47 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:14:34.016 04:05:47 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:14:34.016 04:05:47 -- target/device_removal.sh@77 -- # grep mlx5_0 00:14:34.016 04:05:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:34.016 04:05:47 -- common/autotest_common.sh@10 -- # set +x 00:14:34.016 04:05:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:34.016 04:05:47 -- target/device_removal.sh@78 -- # return 1 00:14:34.016 04:05:47 -- target/device_removal.sh@149 -- # break 00:14:34.016 04:05:47 -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:14:34.016 04:05:47 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:14:34.016 04:05:47 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:14:34.016 04:05:47 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:14:34.016 04:05:47 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:14:34.016 04:05:47 -- common/autotest_common.sh@10 -- # set +x 00:14:34.016 04:05:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:34.016 04:05:47 -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:14:34.016 04:05:47 -- target/device_removal.sh@160 -- # rescan_pci 00:14:34.016 04:05:47 -- target/device_removal.sh@57 -- # echo 1 00:14:34.016 [2024-04-19 04:05:48.242344] rdma.c:3314:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0xffe520, err 11. Skip rescan. 00:14:34.273 [2024-04-19 04:05:48.603759] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdfb060/0xdff550) succeed. 00:14:34.273 [2024-04-19 04:05:48.603813] rdma.c:3367:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 00:14:35.205 04:05:49 -- target/device_removal.sh@162 -- # seq 1 10 00:14:35.205 04:05:49 -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:14:35.205 04:05:49 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/net 00:14:35.205 04:05:49 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0 00:14:35.205 04:05:49 -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]] 00:14:35.205 04:05:49 -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]] 00:14:35.205 04:05:49 -- target/device_removal.sh@171 -- # break 00:14:35.205 04:05:49 -- target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]] 00:14:35.205 04:05:49 -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up 00:14:37.110 04:05:51 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0 00:14:37.110 04:05:51 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:37.111 04:05:51 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:37.111 04:05:51 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:37.111 04:05:51 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:37.111 04:05:51 -- target/device_removal.sh@180 
-- # [[ -z '' ]] 00:14:37.111 04:05:51 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:14:37.111 04:05:51 -- target/device_removal.sh@186 -- # seq 1 10 00:14:37.111 04:05:51 -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:14:37.111 04:05:51 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:14:37.111 04:05:51 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:14:37.111 04:05:51 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:14:37.111 04:05:51 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:14:37.111 04:05:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.111 04:05:51 -- common/autotest_common.sh@10 -- # set +x 00:14:37.111 [2024-04-19 04:05:51.615968] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:37.111 [2024-04-19 04:05:51.616001] rdma.c:3373:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:14:37.111 [2024-04-19 04:05:51.616014] rdma.c:3897:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:14:37.111 [2024-04-19 04:05:51.616024] rdma.c:3897:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:14:37.111 04:05:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.370 04:05:51 -- target/device_removal.sh@187 -- # ib_count=2 00:14:37.370 04:05:51 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:14:37.370 04:05:51 -- target/device_removal.sh@189 -- # break 00:14:37.370 04:05:51 -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:14:37.370 04:05:51 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_1 00:14:37.370 04:05:51 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1 00:14:37.370 04:05:51 -- target/device_removal.sh@71 -- # dev_name=mlx_0_1 00:14:37.370 04:05:51 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1 
00:14:37.370 04:05:51 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:14:37.370 04:05:51 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:14:37.370 04:05:51 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1/infiniband 00:14:37.370 04:05:51 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1 00:14:37.370 04:05:51 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1 00:14:37.370 04:05:51 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:37.370 04:05:51 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:37.370 04:05:51 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:37.370 04:05:51 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:37.370 04:05:51 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:14:37.371 04:05:51 -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_1 00:14:37.371 04:05:51 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:14:37.371 04:05:51 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:14:37.371 04:05:51 -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1 00:14:37.371 04:05:51 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:14:37.371 04:05:51 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:14:37.371 04:05:51 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:14:37.371 04:05:51 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:14:37.371 04:05:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.371 04:05:51 -- target/device_removal.sh@77 -- # grep mlx5_1 00:14:37.371 04:05:51 -- common/autotest_common.sh@10 -- # set +x 00:14:37.371 04:05:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.371 mlx5_1 00:14:37.371 04:05:51 -- target/device_removal.sh@78 -- # return 0 00:14:37.371 04:05:51 
-- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1 00:14:37.371 04:05:51 -- target/device_removal.sh@66 -- # dev_name=mlx_0_1 00:14:37.371 04:05:51 -- target/device_removal.sh@67 -- # echo 1 00:14:37.371 04:05:51 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1 00:14:37.371 04:05:51 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:14:37.371 04:05:51 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:14:37.371 [2024-04-19 04:05:51.759007] rdma.c:3610:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed. 00:14:37.371 [2024-04-19 04:05:51.761628] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno No such device or address (6) 00:14:37.371 [2024-04-19 04:05:51.764837] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:14:37.371 [2024-04-19 04:05:51.764849] rdma.c: 916:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 95 00:14:37.371 [2024-04-19 04:05:51.764854] rdma.c: 703:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1) 00:14:37.371 [2024-04-19 04:05:51.764859] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.371 [2024-04-19 04:05:51.764864] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.371 [2024-04-19 04:05:51.764868] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.371 [2024-04-19 04:05:51.764872] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.371 [2024-04-19 04:05:51.764877] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.371 [2024-04-19 04:05:51.764881] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.371 [2024-04-19 04:05:51.764885] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.371 [2024-04-19 
04:05:51.764889] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.371 [2024-04-19 04:05:51.764893] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.371 [2024-04-19 04:05:51.764898] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.371 [2024-04-19 04:05:51.764902] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.371 [2024-04-19 04:05:51.764906] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.371 [2024-04-19 04:05:51.764911] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.371 [2024-04-19 04:05:51.764915] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.371 [2024-04-19 04:05:51.764920] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.371 [2024-04-19 04:05:51.764925] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.371 [2024-04-19 04:05:51.764929] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.371 [2024-04-19 04:05:51.764933] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.371 [2024-04-19 04:05:51.764938] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.371 [2024-04-19 04:05:51.764942] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.371 [2024-04-19 04:05:51.764946] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.371 [2024-04-19 04:05:51.764951] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.371 [2024-04-19 04:05:51.764955] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.371 [2024-04-19 04:05:51.764959] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.371 [2024-04-19 04:05:51.764963] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.371 [2024-04-19 04:05:51.764968] rdma.c: 
691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.371 [2024-04-19 04:05:51.764972] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.371 [2024-04-19 04:05:51.764976] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.371 [2024-04-19 04:05:51.764980] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.371 [2024-04-19 04:05:51.764984] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.371 [2024-04-19 04:05:51.764992] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.371 [2024-04-19 04:05:51.764997] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.371 [2024-04-19 04:05:51.765002] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.371 [2024-04-19 04:05:51.765007] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.371 [2024-04-19 04:05:51.765012] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.371 [2024-04-19 04:05:51.765016] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.371 [2024-04-19 04:05:51.765021] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.371 [2024-04-19 04:05:51.765025] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.371 [2024-04-19 04:05:51.765031] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.371 [2024-04-19 04:05:51.765035] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.371 [2024-04-19 04:05:51.765040] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.371 [2024-04-19 04:05:51.765044] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.371 [2024-04-19 04:05:51.765048] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.371 [2024-04-19 04:05:51.765052] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: 
Request opcode: 1 00:14:37.371 [2024-04-19 04:05:51.765056] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.371 [2024-04-19 04:05:51.765061] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.371 [2024-04-19 04:05:51.765065] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.371 [2024-04-19 04:05:51.765069] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.371 [2024-04-19 04:05:51.765073] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.371 [2024-04-19 04:05:51.765077] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.371 [2024-04-19 04:05:51.765082] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.371 [2024-04-19 04:05:51.765086] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.371 [2024-04-19 04:05:51.765090] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.371 [2024-04-19 04:05:51.765094] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.371 [2024-04-19 04:05:51.765098] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.371 [2024-04-19 04:05:51.765103] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.371 [2024-04-19 04:05:51.765107] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.371 [2024-04-19 04:05:51.765111] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.371 [2024-04-19 04:05:51.765116] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.371 [2024-04-19 04:05:51.765120] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.371 [2024-04-19 04:05:51.765124] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.371 [2024-04-19 04:05:51.765128] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.371 
[2024-04-19 04:05:51.765132] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.371 [2024-04-19 04:05:51.765137] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.371 [2024-04-19 04:05:51.765141] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.371 [2024-04-19 04:05:51.765145] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.371 [2024-04-19 04:05:51.765149] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.371 [2024-04-19 04:05:51.765154] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.371 [2024-04-19 04:05:51.765158] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.371 [2024-04-19 04:05:51.765163] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.371 [2024-04-19 04:05:51.765167] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.371 [2024-04-19 04:05:51.765171] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.371 [2024-04-19 04:05:51.765176] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.371 [2024-04-19 04:05:51.765181] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.371 [2024-04-19 04:05:51.765185] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.371 [2024-04-19 04:05:51.765189] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.371 [2024-04-19 04:05:51.765193] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.371 [2024-04-19 04:05:51.765197] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.371 [2024-04-19 04:05:51.765201] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.371 [2024-04-19 04:05:51.765206] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.371 [2024-04-19 04:05:51.765211] rdma.c: 
689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.371 [2024-04-19 04:05:51.765215] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.371 [2024-04-19 04:05:51.765219] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.371 [2024-04-19 04:05:51.765223] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.371 [2024-04-19 04:05:51.765227] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.371 [2024-04-19 04:05:51.765231] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.371 [2024-04-19 04:05:51.765235] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.371 [2024-04-19 04:05:51.765240] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.371 [2024-04-19 04:05:51.765244] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.371 [2024-04-19 04:05:51.765248] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.371 [2024-04-19 04:05:51.765252] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.371 [2024-04-19 04:05:51.765257] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.371 [2024-04-19 04:05:51.765261] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.371 [2024-04-19 04:05:51.765265] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765269] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765274] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765279] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.372 [2024-04-19 04:05:51.765283] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.372 [2024-04-19 04:05:51.765287] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: 
Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765291] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765296] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765300] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765305] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765309] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765313] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765317] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765322] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765326] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765330] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765334] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765338] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.372 [2024-04-19 04:05:51.765342] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.372 [2024-04-19 04:05:51.765347] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765351] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765356] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765361] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765365] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 
00:14:37.372 [2024-04-19 04:05:51.765369] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765373] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.372 [2024-04-19 04:05:51.765377] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.372 [2024-04-19 04:05:51.765381] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765385] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765390] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.372 [2024-04-19 04:05:51.765394] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.372 [2024-04-19 04:05:51.765398] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765407] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765412] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.372 [2024-04-19 04:05:51.765416] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.372 [2024-04-19 04:05:51.765421] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765425] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765430] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.372 [2024-04-19 04:05:51.765434] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.372 [2024-04-19 04:05:51.765438] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765443] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765448] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 
04:05:51.765452] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765456] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.372 [2024-04-19 04:05:51.765460] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.372 [2024-04-19 04:05:51.765465] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765469] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765473] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.372 [2024-04-19 04:05:51.765477] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.372 [2024-04-19 04:05:51.765482] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765486] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765490] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.372 [2024-04-19 04:05:51.765494] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.372 [2024-04-19 04:05:51.765499] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.372 [2024-04-19 04:05:51.765503] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.372 [2024-04-19 04:05:51.765507] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765511] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.372 [2024-04-19 04:05:51.765515] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765519] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765524] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765528] rdma.c: 
691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765532] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765536] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765542] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765546] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765550] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.372 [2024-04-19 04:05:51.765555] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.372 [2024-04-19 04:05:51.765559] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.372 [2024-04-19 04:05:51.765564] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.372 [2024-04-19 04:05:51.765569] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.372 [2024-04-19 04:05:51.765573] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.372 [2024-04-19 04:05:51.765578] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765583] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765587] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.372 [2024-04-19 04:05:51.765595] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.372 [2024-04-19 04:05:51.765600] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.372 [2024-04-19 04:05:51.765604] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.372 [2024-04-19 04:05:51.765610] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765614] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: 
Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765619] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.372 [2024-04-19 04:05:51.765623] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.372 [2024-04-19 04:05:51.765628] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765631] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765636] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765640] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765644] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765648] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765652] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.372 [2024-04-19 04:05:51.765656] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.372 [2024-04-19 04:05:51.765661] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765665] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765669] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765673] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:37.372 [2024-04-19 04:05:51.765678] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:14:37.372 [2024-04-19 04:05:51.765682] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:14:37.372 [2024-04-19 04:05:51.765686] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:14:37.372 [2024-04-19 04:05:51.765690] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:14:43.932 
04:05:57 -- target/device_removal.sh@147 -- # seq 1 10 00:14:43.932 04:05:57 -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:14:43.932 04:05:57 -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:14:43.932 04:05:57 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:14:43.932 04:05:57 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:14:43.932 04:05:57 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:14:43.932 04:05:57 -- target/device_removal.sh@77 -- # grep mlx5_1 00:14:43.932 04:05:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.932 04:05:57 -- common/autotest_common.sh@10 -- # set +x 00:14:43.932 04:05:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.932 04:05:57 -- target/device_removal.sh@78 -- # return 1 00:14:43.932 04:05:57 -- target/device_removal.sh@149 -- # break 00:14:43.932 04:05:57 -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:14:43.932 04:05:57 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:14:43.932 04:05:57 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:14:43.932 04:05:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.932 04:05:57 -- common/autotest_common.sh@10 -- # set +x 00:14:43.932 04:05:57 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:14:43.932 04:05:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.932 04:05:57 -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:14:43.932 04:05:57 -- target/device_removal.sh@160 -- # rescan_pci 00:14:43.932 04:05:57 -- target/device_removal.sh@57 -- # echo 1 00:14:43.932 [2024-04-19 04:05:58.271491] rdma.c:3314:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0xdfe040, err 11. Skip rescan. 
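The `check_rdma_dev_exists_in_nvmf_tgt` calls in the transcript extract device names from `nvmf_get_stats` with jq and grep for the expected name; the grep exit status becomes the function's return value (return 0 when found, return 1 otherwise, which is what drives the `break` above). A minimal sketch of that pattern, with a hard-coded name list standing in for the real RPC output (`DEVICE_NAMES` and `check_rdma_dev_exists` are hypothetical stand-ins; the real helper pipes `rpc_cmd nvmf_get_stats` through jq):

```shell
#!/bin/sh
# Hypothetical stand-in for check_rdma_dev_exists_in_nvmf_tgt: in the real
# test the name list comes from
#   rpc_cmd nvmf_get_stats | jq -r '.poll_groups[0].transports[].devices[].name'
# Here it is a plain variable so the sketch runs without a live target.
DEVICE_NAMES='mlx5_0
mlx5_1'

check_rdma_dev_exists() {
    rdma_dev_name=$1
    # grep -qx: quiet, whole-line match; its exit status is the return value
    echo "$DEVICE_NAMES" | grep -qx "$rdma_dev_name"
}

check_rdma_dev_exists mlx5_1 && echo "mlx5_1 present"
# prints: mlx5_1 present
```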
00:14:44.190 [2024-04-19 04:05:58.627624] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xde7e00/0xe40be0) succeed. 00:14:44.190 [2024-04-19 04:05:58.627689] rdma.c:3367:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen. 00:14:45.122 04:05:59 -- target/device_removal.sh@162 -- # seq 1 10 00:14:45.122 04:05:59 -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:14:45.122 04:05:59 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1/net 00:14:45.122 04:05:59 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1 00:14:45.122 04:05:59 -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]] 00:14:45.122 04:05:59 -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]] 00:14:45.122 04:05:59 -- target/device_removal.sh@171 -- # break 00:14:45.122 04:05:59 -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]] 00:14:45.122 04:05:59 -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up 00:14:47.649 04:06:01 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1 00:14:47.649 04:06:01 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:47.649 04:06:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:47.649 04:06:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:47.649 04:06:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:47.649 04:06:01 -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:14:47.649 04:06:01 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:14:47.649 04:06:01 -- target/device_removal.sh@186 -- # seq 1 10 00:14:47.649 04:06:01 -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:14:47.649 04:06:01 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:14:47.649 04:06:01 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:14:47.649 04:06:01 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:14:47.649 04:06:01 -- 
target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:14:47.649 04:06:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:47.649 04:06:01 -- common/autotest_common.sh@10 -- # set +x 00:14:47.649 [2024-04-19 04:06:01.703760] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:14:47.649 [2024-04-19 04:06:01.703791] rdma.c:3373:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back 00:14:47.649 [2024-04-19 04:06:01.703805] rdma.c:3897:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:14:47.649 [2024-04-19 04:06:01.703816] rdma.c:3897:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:14:47.649 04:06:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:47.649 04:06:01 -- target/device_removal.sh@187 -- # ib_count=2 00:14:47.649 04:06:01 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:14:47.649 04:06:01 -- target/device_removal.sh@189 -- # break 00:14:47.649 04:06:01 -- target/device_removal.sh@200 -- # stop_bdevperf 00:14:47.649 04:06:01 -- target/device_removal.sh@116 -- # wait 291056 00:15:55.330 0 00:15:55.330 04:07:06 -- target/device_removal.sh@118 -- # killprocess 290788 00:15:55.331 04:07:06 -- common/autotest_common.sh@936 -- # '[' -z 290788 ']' 00:15:55.331 04:07:06 -- common/autotest_common.sh@940 -- # kill -0 290788 00:15:55.331 04:07:06 -- common/autotest_common.sh@941 -- # uname 00:15:55.331 04:07:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:55.331 04:07:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 290788 00:15:55.331 04:07:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:55.331 04:07:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:55.331 04:07:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 290788' 00:15:55.331 killing process with pid 290788 00:15:55.331 
04:07:06 -- common/autotest_common.sh@955 -- # kill 290788 00:15:55.331 04:07:06 -- common/autotest_common.sh@960 -- # wait 290788 00:15:55.331 04:07:07 -- target/device_removal.sh@119 -- # bdevperf_pid= 00:15:55.331 04:07:07 -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:15:55.331 [2024-04-19 04:05:35.541242] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:15:55.331 [2024-04-19 04:05:35.541282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290788 ] 00:15:55.331 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.331 [2024-04-19 04:05:35.587428] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.331 [2024-04-19 04:05:35.654137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:55.331 Running I/O for 90 seconds... 
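The `for i in $(seq 1 10)` / `break` sequences earlier in the transcript implement a bounded poll: the test repeatedly compares the IB device count reported by the target against the count recorded after removal, and breaks out as soon as the device reappears. A sketch of that polling pattern under stated assumptions (`probe_ib_count` and `wait_for_ib_recovery` are hypothetical names; the real count comes from `rpc_cmd nvmf_get_stats` piped through jq as shown in the log):

```shell
#!/bin/sh
# Sketch of the bounded retry loop from device_removal.sh: poll up to 10
# times until the IB device count exceeds the count seen after removal.
probe_ib_count() {
    # Stand-in for: rpc_cmd nvmf_get_stats \
    #   | jq -r '.poll_groups[0].transports[].devices | length'
    echo 2
}

wait_for_ib_recovery() {
    ib_count_after_remove=$1
    for i in $(seq 1 10); do
        ib_count=$(probe_ib_count)
        if [ "$ib_count" -gt "$ib_count_after_remove" ]; then
            return 0    # device came back; caller's `break` fires here
        fi
        sleep 1
    done
    return 1            # gave up after 10 attempts
}

wait_for_ib_recovery 1 && echo "device came back"
# prints: device came back
```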
00:15:55.331 [2024-04-19 04:05:41.649547] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:15:55.331 [2024-04-19 04:05:41.649581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:15:55.331 [2024-04-19 04:05:41.649592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32535 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0
00:15:55.331 [2024-04-19 04:05:41.649601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:15:55.331 [2024-04-19 04:05:41.649608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32535 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0
00:15:55.331 [2024-04-19 04:05:41.649616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:15:55.331 [2024-04-19 04:05:41.649623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32535 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0
00:15:55.331 [2024-04-19 04:05:41.649629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:15:55.331 [2024-04-19 04:05:41.649635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32535 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0
00:15:55.331 [2024-04-19 04:05:41.651198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:15:55.331 [2024-04-19 04:05:41.651210] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:15:55.331 [2024-04-19 04:05:41.651234] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:15:55.331 [2024-04-19 04:05:41.659544] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.669568] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.679595] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.689621] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.699646] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.709671] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.719695] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.729721] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.739745] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.749770] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.760169] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.770194] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.780253] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.790278] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.800549] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.810575] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.820600] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.831265] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.841671] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.852257] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.862280] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.872306] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.882392] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.892418] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.902444] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.912470] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.923209] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.934613] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.944637] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.954663] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.964688] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.974982] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.985009] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:41.995239] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:42.005263] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:42.015812] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:42.026048] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:42.036447] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:42.046474] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:42.056501] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:42.066528] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:42.076824] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:42.087257] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:42.097868] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:42.108307] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:42.118409] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:42.128539] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:42.138564] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:42.148589] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:42.158674] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:42.168700] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:42.179153] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:42.190585] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:42.200816] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:42.210975] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:42.221000] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.331 [2024-04-19 04:05:42.231149] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.241174] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.251203] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.261511] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.272185] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.282539] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.292784] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.302899] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.312926] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.322952] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.333198] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.343226] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.353502] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.363627] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.373654] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.383735] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.394243] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.404512] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.414794] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.424819] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.435058] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.445085] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.455219] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.465245] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.475271] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.485918] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.496222] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.506597] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.516725] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.526750] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.536775] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.546800] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.556827] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.566854] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.577607] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.587792] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.598111] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.608209] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.618265] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.628291] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.638503] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.648810] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.332 [2024-04-19 04:05:42.653638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:237544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007704000 len:0x1000 key:0x180c00 00:15:55.332 [2024-04-19 04:05:42.653655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.332 [2024-04-19 04:05:42.653669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:237552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007702000 len:0x1000 key:0x180c00 00:15:55.332 [2024-04-19 04:05:42.653679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.332 [2024-04-19 04:05:42.653687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:237560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007700000 len:0x1000 key:0x180c00 00:15:55.332 [2024-04-19 04:05:42.653693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.332 [2024-04-19 04:05:42.653701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:237568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.332 [2024-04-19 04:05:42.653707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.332 [2024-04-19 04:05:42.653714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:237576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.332 [2024-04-19 04:05:42.653720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.332 [2024-04-19 04:05:42.653727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:116 nsid:1 lba:237584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.332 [2024-04-19 04:05:42.653733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.332 [2024-04-19 04:05:42.653740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:237592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.332 [2024-04-19 04:05:42.653746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.332 [2024-04-19 04:05:42.653753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:237600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.332 [2024-04-19 04:05:42.653758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.332 [2024-04-19 04:05:42.653765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:237608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.332 [2024-04-19 04:05:42.653771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.332 [2024-04-19 04:05:42.653778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:237616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.332 [2024-04-19 04:05:42.653784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.332 [2024-04-19 04:05:42.653791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:237624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.332 [2024-04-19 04:05:42.653797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.332 [2024-04-19 04:05:42.653804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:237632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.332 [2024-04-19 04:05:42.653809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.332 [2024-04-19 04:05:42.653816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:237640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.332 [2024-04-19 04:05:42.653822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.332 [2024-04-19 04:05:42.653829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:237648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.332 [2024-04-19 04:05:42.653836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.332 [2024-04-19 04:05:42.653843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:237656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.332 [2024-04-19 04:05:42.653849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.332 [2024-04-19 04:05:42.653856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:237664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.332 [2024-04-19 04:05:42.653862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.332 [2024-04-19 04:05:42.653869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:111 nsid:1 lba:237672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.332 [2024-04-19 04:05:42.653877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.332 [2024-04-19 04:05:42.653884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:237680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.332 [2024-04-19 04:05:42.653889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.332 [2024-04-19 04:05:42.653898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:237688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.332 [2024-04-19 04:05:42.653904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.332 [2024-04-19 04:05:42.653911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:237696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.332 [2024-04-19 04:05:42.653917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.332 [2024-04-19 04:05:42.653924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:237704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.653929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.653937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:237712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.653943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.653950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:237720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.653956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.653963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:237728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.653969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.653976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:237736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.653982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.653989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:237744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.653996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:237752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:120 nsid:1 lba:237760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:237768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:237776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:237784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:237792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:237800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:237808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:237816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:237824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:237832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:237840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 
lba:237848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:237856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:237864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:237872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:237880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:237888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:237896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:237904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:237912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:237920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:237928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:237936 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:237944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:237952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:237960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:237968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:237976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 
cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:237984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:237992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.333 [2024-04-19 04:05:42.654407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.333 [2024-04-19 04:05:42.654414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:238000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:238008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:238016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:238024 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:238032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:238040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:238048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:238056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:238064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 
sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:238072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:238080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:238088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:238096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:238104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:238112 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:238120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:238128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:238136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:238144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:238152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 
p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:238160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:238168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:238176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:238184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:238192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:238200 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:238208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:238216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:238224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:238232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:238240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 
dnr:0 00:15:55.334 [2024-04-19 04:05:42.654821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:238248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:238256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:238264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:238272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:238280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:238288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:55.334 [2024-04-19 04:05:42.654890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:238296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:238304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.334 [2024-04-19 04:05:42.654915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.334 [2024-04-19 04:05:42.654922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:238312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.335 [2024-04-19 04:05:42.654928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.335 [2024-04-19 04:05:42.654935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:238320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.335 [2024-04-19 04:05:42.654940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.335 [2024-04-19 04:05:42.654949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:238328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.335 [2024-04-19 04:05:42.654955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 
00:15:55.335 [2024-04-19 04:05:42.654962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:238336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.335 [2024-04-19 04:05:42.654968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.335 [2024-04-19 04:05:42.654975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:238344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.335 [2024-04-19 04:05:42.654981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.335 [2024-04-19 04:05:42.654988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:238352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.335 [2024-04-19 04:05:42.654994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.335 [2024-04-19 04:05:42.655000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:238360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.335 [2024-04-19 04:05:42.655006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.335 [2024-04-19 04:05:42.655013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:238368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.335 [2024-04-19 04:05:42.655019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.335 [2024-04-19 04:05:42.655026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:238376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:55.335 [2024-04-19 04:05:42.655031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.335 [2024-04-19 04:05:42.655038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:238384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.335 [2024-04-19 04:05:42.655044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.335 [2024-04-19 04:05:42.655050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:238392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.335 [2024-04-19 04:05:42.655056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.335 [2024-04-19 04:05:42.655065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:238400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.335 [2024-04-19 04:05:42.655070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.335 [2024-04-19 04:05:42.655077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:238408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.335 [2024-04-19 04:05:42.655082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.335 [2024-04-19 04:05:42.655089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:238416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.335 [2024-04-19 04:05:42.655095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 
00:15:55.335 [2024-04-19 04:05:42.655102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:238424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.335 [2024-04-19 04:05:42.655108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.335 [2024-04-19 04:05:42.655116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:238432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.335 [2024-04-19 04:05:42.655121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.335 [2024-04-19 04:05:42.655128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:238440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.335 [2024-04-19 04:05:42.655134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.335 [2024-04-19 04:05:42.655140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:238448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.335 [2024-04-19 04:05:42.655146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.335 [2024-04-19 04:05:42.655154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:238456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.335 [2024-04-19 04:05:42.655160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.335 [2024-04-19 04:05:42.655167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:238464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:55.335 [2024-04-19 04:05:42.655173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.335 [2024-04-19 04:05:42.655180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:238472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.335 [2024-04-19 04:05:42.655185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.335 [2024-04-19 04:05:42.655192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:238480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.335 [2024-04-19 04:05:42.655198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.335 [2024-04-19 04:05:42.655205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:238488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.335 [2024-04-19 04:05:42.655211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.335 [2024-04-19 04:05:42.655218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:238496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.335 [2024-04-19 04:05:42.655223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.335 [2024-04-19 04:05:42.655230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:238504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.335 [2024-04-19 04:05:42.655236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 
00:15:55.335 [2024-04-19 04:05:42.655243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:238512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.335 [2024-04-19 04:05:42.655248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.335 [2024-04-19 04:05:42.655256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:238520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.335 [2024-04-19 04:05:42.655262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.335 [2024-04-19 04:05:42.655271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:238528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.335 [2024-04-19 04:05:42.655277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.335 [2024-04-19 04:05:42.655284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:238536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.335 [2024-04-19 04:05:42.655289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.335 [2024-04-19 04:05:42.655296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:238544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.336 [2024-04-19 04:05:42.655302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.336 [2024-04-19 04:05:42.666958] rdma_verbs.c: 83:spdk_rdma_qp_destroy: *WARNING*: Destroying qpair with queued Work Requests 00:15:55.336 [2024-04-19 04:05:42.667016] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:55.336 [2024-04-19 04:05:42.667023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:55.336 [2024-04-19 04:05:42.667029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:238552 len:8 PRP1 0x0 PRP2 0x0 00:15:55.336 [2024-04-19 04:05:42.667035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.336 [2024-04-19 04:05:42.668443] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:15:55.336 [2024-04-19 04:05:42.668695] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:15:55.336 [2024-04-19 04:05:42.668706] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:15:55.336 [2024-04-19 04:05:42.668712] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:15:55.336 [2024-04-19 04:05:42.668725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:55.336 [2024-04-19 04:05:42.668732] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 
00:15:55.336 [2024-04-19 04:05:42.668741] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:15:55.336 [2024-04-19 04:05:42.668747] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:15:55.336 [2024-04-19 04:05:42.668753] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:15:55.336 [2024-04-19 04:05:42.668770] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:55.336 [2024-04-19 04:05:42.668775] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:15:55.336 [2024-04-19 04:05:43.671229] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:15:55.336 [2024-04-19 04:05:43.671257] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:15:55.336 [2024-04-19 04:05:43.671263] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:15:55.336 [2024-04-19 04:05:43.671280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:55.336 [2024-04-19 04:05:43.671288] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 
00:15:55.336 [2024-04-19 04:05:43.671297] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:15:55.336 [2024-04-19 04:05:43.671306] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:15:55.336 [2024-04-19 04:05:43.671313] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:15:55.336 [2024-04-19 04:05:43.671648] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:15:55.336 [2024-04-19 04:05:43.671657] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:15:55.336 [2024-04-19 04:05:44.674112] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:15:55.336 [2024-04-19 04:05:44.674140] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:15:55.336 [2024-04-19 04:05:44.674146] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080
00:15:55.336 [2024-04-19 04:05:44.674162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:15:55.336 [2024-04-19 04:05:44.674168] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:15:55.336 [2024-04-19 04:05:44.674177] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:15:55.336 [2024-04-19 04:05:44.674183] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:15:55.336 [2024-04-19 04:05:44.674189] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:15:55.336 [2024-04-19 04:05:44.674207] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:15:55.336 [2024-04-19 04:05:44.674214] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:15:55.336 [2024-04-19 04:05:46.681180] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:15:55.336 [2024-04-19 04:05:46.681209] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080
00:15:55.336 [2024-04-19 04:05:46.681228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:15:55.336 [2024-04-19 04:05:46.681235] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:15:55.336 [2024-04-19 04:05:46.681245] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:15:55.336 [2024-04-19 04:05:46.681251] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:15:55.336 [2024-04-19 04:05:46.681257] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:15:55.336 [2024-04-19 04:05:46.681274] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:15:55.336 [2024-04-19 04:05:46.681281] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:15:55.336 [2024-04-19 04:05:48.686098] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:15:55.336 [2024-04-19 04:05:48.686120] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080
00:15:55.336 [2024-04-19 04:05:48.686139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:15:55.336 [2024-04-19 04:05:48.686147] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:15:55.336 [2024-04-19 04:05:48.686157] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:15:55.336 [2024-04-19 04:05:48.686163] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:15:55.336 [2024-04-19 04:05:48.686173] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:15:55.336 [2024-04-19 04:05:48.686190] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:15:55.336 [2024-04-19 04:05:48.686197] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:15:55.336 [2024-04-19 04:05:50.691018] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:15:55.336 [2024-04-19 04:05:50.691050] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080
00:15:55.336 [2024-04-19 04:05:50.691069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:15:55.336 [2024-04-19 04:05:50.691076] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:15:55.336 [2024-04-19 04:05:50.691087] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:15:55.336 [2024-04-19 04:05:50.691093] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:15:55.336 [2024-04-19 04:05:50.691099] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:15:55.336 [2024-04-19 04:05:50.691117] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:15:55.336 [2024-04-19 04:05:50.691124] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:15:55.336 [2024-04-19 04:05:51.741490] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:15:55.336 [2024-04-19 04:05:51.762928] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:15:55.336 [2024-04-19 04:05:51.762948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:15:55.336 [2024-04-19 04:05:51.762956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32535 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0
00:15:55.336 [2024-04-19 04:05:51.762963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:15:55.336 [2024-04-19 04:05:51.762970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32535 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0
00:15:55.336 [2024-04-19 04:05:51.762976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:15:55.336 [2024-04-19 04:05:51.762982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32535 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0
00:15:55.336 [2024-04-19 04:05:51.762988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:15:55.336 [2024-04-19 04:05:51.762994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32535 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0
00:15:55.336 [2024-04-19 04:05:51.764995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:15:55.336 [2024-04-19 04:05:51.765006] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:15:55.336 [2024-04-19 04:05:51.765023] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:15:55.336 [2024-04-19 04:05:51.772939] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.336 [2024-04-19 04:05:51.782963] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.336 [2024-04-19 04:05:51.792989] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.336 [2024-04-19 04:05:51.803013] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.336 [2024-04-19 04:05:51.813039] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.336 [2024-04-19 04:05:51.823064] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.336 [2024-04-19 04:05:51.833089] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.336 [2024-04-19 04:05:51.843115] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.336 [2024-04-19 04:05:51.853141] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.336 [2024-04-19 04:05:51.863167] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.336 [2024-04-19 04:05:51.873191] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.336 [2024-04-19 04:05:51.883217] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.336 [2024-04-19 04:05:51.893242] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:51.903269] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:51.913295] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:51.923319] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:51.933344] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:51.943371] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:51.953395] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:51.963422] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:51.973448] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:51.983475] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:51.993502] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.003528] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.013554] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.023579] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.033606] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.043632] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.053658] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.063683] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.073708] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.083733] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.093760] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.103787] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.113812] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.123837] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.133861] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.143886] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.153913] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.163937] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.173964] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.183991] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.194017] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.204044] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.214070] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.224096] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.234122] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.244147] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.254171] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.264197] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.274224] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.284251] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.294276] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.304300] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.314325] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.324350] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.334374] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.344410] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.354426] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.364450] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.374477] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.384502] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.394528] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.404554] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.414581] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.424605] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.434632] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.444659] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.454683] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.464707] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.474732] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.484758] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.494782] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.504809] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.514834] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.524860] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.534884] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.544911] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.554937] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.564962] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.574989] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.585014] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.595039] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.605064] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.615091] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.625118] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.635143] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.645167] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.655193] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.665218] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.675244] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.685268] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.695295] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.705384] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.715560] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.726299] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.738662] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.748909] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.759129] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:15:55.337 [2024-04-19 04:05:52.767882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.337 [2024-04-19 04:05:52.767899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0
00:15:55.337 [2024-04-19 04:05:52.767912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:121288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.337 [2024-04-19 04:05:52.767918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0
00:15:55.337 [2024-04-19 04:05:52.767925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.338 [2024-04-19 04:05:52.767931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0
00:15:55.338 [2024-04-19 04:05:52.767938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:121304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.338 [2024-04-19 04:05:52.767944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0
00:15:55.338 [2024-04-19 04:05:52.767951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.338 [2024-04-19 04:05:52.767956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0
00:15:55.338 [2024-04-19 04:05:52.767964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:121320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.338 [2024-04-19 04:05:52.767970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0
00:15:55.338 [2024-04-19 04:05:52.767977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.338 [2024-04-19 04:05:52.767983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0
00:15:55.338 [2024-04-19 04:05:52.767990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.338 [2024-04-19 04:05:52.767996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0
00:15:55.338 [2024-04-19 04:05:52.768002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:121344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.338 [2024-04-19 04:05:52.768008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0
00:15:55.338 [2024-04-19 04:05:52.768015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.338 [2024-04-19 04:05:52.768023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0
00:15:55.338 [2024-04-19 04:05:52.768030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:121360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.338 [2024-04-19 04:05:52.768036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0
00:15:55.338 [2024-04-19 04:05:52.768043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.338 [2024-04-19 04:05:52.768049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0
00:15:55.338 [2024-04-19 04:05:52.768055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.338 [2024-04-19 04:05:52.768061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0
00:15:55.338 [2024-04-19 04:05:52.768068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:121384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.338 [2024-04-19 04:05:52.768074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0
00:15:55.338 [2024-04-19 04:05:52.768080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:121392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.338 [2024-04-19 04:05:52.768086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0
00:15:55.338 [2024-04-19 04:05:52.768092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:121400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.338 [2024-04-19 04:05:52.768098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0
00:15:55.338 [2024-04-19 04:05:52.768105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.338 [2024-04-19 04:05:52.768111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0
00:15:55.338 [2024-04-19 04:05:52.768118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:121416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.338 [2024-04-19 04:05:52.768125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0
00:15:55.338 [2024-04-19 04:05:52.768132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:121424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.338 [2024-04-19 04:05:52.768138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0
00:15:55.338 [2024-04-19 04:05:52.768146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:121432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.338 [2024-04-19 04:05:52.768151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0
00:15:55.338 [2024-04-19 04:05:52.768158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.338 [2024-04-19 04:05:52.768164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0
00:15:55.338 [2024-04-19 04:05:52.768171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:121448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.338 [2024-04-19 04:05:52.768177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0
00:15:55.338 [2024-04-19 04:05:52.768185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.338 [2024-04-19 04:05:52.768191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0
00:15:55.338 [2024-04-19 04:05:52.768198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.338 [2024-04-19 04:05:52.768204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0
00:15:55.338 [2024-04-19 04:05:52.768211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:121472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.338 [2024-04-19 04:05:52.768216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0
00:15:55.338 [2024-04-19 04:05:52.768223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:121480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.338 [2024-04-19 04:05:52.768229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0
00:15:55.338 [2024-04-19 04:05:52.768236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:121488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.338 [2024-04-19 04:05:52.768242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0
00:15:55.338 [2024-04-19 04:05:52.768248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:121496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:55.338 [2024-04-19 04:05:52.768254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.338 [2024-04-19 04:05:52.768261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:121504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.338 [2024-04-19 04:05:52.768266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.338 [2024-04-19 04:05:52.768273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:121512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.338 [2024-04-19 04:05:52.768279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.338 [2024-04-19 04:05:52.768287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.338 [2024-04-19 04:05:52.768292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.338 [2024-04-19 04:05:52.768299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:121528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.338 [2024-04-19 04:05:52.768305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.338 [2024-04-19 04:05:52.768311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:121536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.338 [2024-04-19 04:05:52.768317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 
00:15:55.338 [2024-04-19 04:05:52.768324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:121544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.338 [2024-04-19 04:05:52.768330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.338 [2024-04-19 04:05:52.768339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:121552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.338 [2024-04-19 04:05:52.768344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.338 [2024-04-19 04:05:52.768351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:121560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.338 [2024-04-19 04:05:52.768357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.338 [2024-04-19 04:05:52.768364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:121568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.338 [2024-04-19 04:05:52.768369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.338 [2024-04-19 04:05:52.768377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:121576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.338 [2024-04-19 04:05:52.768383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.338 [2024-04-19 04:05:52.768389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:121584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:55.338 [2024-04-19 04:05:52.768395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.338 [2024-04-19 04:05:52.768407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:121592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.338 [2024-04-19 04:05:52.768413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.338 [2024-04-19 04:05:52.768420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:121600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.338 [2024-04-19 04:05:52.768425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.338 [2024-04-19 04:05:52.768432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:121624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 
00:15:55.339 [2024-04-19 04:05:52.768471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:121632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:121640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:121648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:121656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:121664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:121672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:55.339 [2024-04-19 04:05:52.768540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:121680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:121688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:121696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:121704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:121712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 
00:15:55.339 [2024-04-19 04:05:52.768610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:121720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:121728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:121736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:121744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:121752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:121760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:55.339 [2024-04-19 04:05:52.768682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:121768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:121776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:121784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:121792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:121800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 
00:15:55.339 [2024-04-19 04:05:52.768752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:121808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:121816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:121832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:121840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:55.339 [2024-04-19 04:05:52.768808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:121848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:55.339 [2024-04-19 04:05:52.768821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007900000 len:0x1000 key:0x1be800 00:15:55.339 [2024-04-19 04:05:52.768834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:120840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007902000 len:0x1000 key:0x1be800 00:15:55.339 [2024-04-19 04:05:52.768847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007904000 len:0x1000 key:0x1be800 00:15:55.339 [2024-04-19 04:05:52.768861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:120856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007906000 len:0x1000 key:0x1be800 00:15:55.339 [2024-04-19 04:05:52.768874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007908000 len:0x1000 key:0x1be800 00:15:55.339 [2024-04-19 
04:05:52.768886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790a000 len:0x1000 key:0x1be800 00:15:55.339 [2024-04-19 04:05:52.768899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:120880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790c000 len:0x1000 key:0x1be800 00:15:55.339 [2024-04-19 04:05:52.768912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.339 [2024-04-19 04:05:52.768919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:120888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790e000 len:0x1000 key:0x1be800 00:15:55.339 [2024-04-19 04:05:52.768925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.768932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:120896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007910000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.768939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.768946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:120904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007912000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.768952] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.768960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007914000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.768966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.768973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007916000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.768979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.768986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:120928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007918000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.768992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.768999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:120936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791a000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.769005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.769012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:120944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791c000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.769017] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.769025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:120952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791e000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.769031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.769038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007920000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.769043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.769051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007922000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.769056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.769064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007924000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.769070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.769077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007926000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.769083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.769091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:120992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007928000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.769096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.769105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:121000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792a000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.769110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.769117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:121008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792c000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.769123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.769131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:121016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792e000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.769137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.769144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:121024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007930000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.769149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 
sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.769157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:121032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007932000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.769162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.769170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:121040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007934000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.769176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.769183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:121048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007936000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.769188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.769195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:121056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007938000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.769201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.769208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:121064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793a000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.769213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 
[2024-04-19 04:05:52.769221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:121072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793c000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.769226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.769234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:121080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793e000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.769239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.769246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:121088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007940000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.769253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.769260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:121096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007942000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.769266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.769274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:121104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007944000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.769280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.769287] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:121112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007946000 len:0x1000 key:0x1be800 00:15:55.340 [2024-04-19 04:05:52.769294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.340 [2024-04-19 04:05:52.769301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:121120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007948000 len:0x1000 key:0x1be800 00:15:55.341 [2024-04-19 04:05:52.769306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.341 [2024-04-19 04:05:52.769313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794a000 len:0x1000 key:0x1be800 00:15:55.341 [2024-04-19 04:05:52.769319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.341 [2024-04-19 04:05:52.769326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:121136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794c000 len:0x1000 key:0x1be800 00:15:55.341 [2024-04-19 04:05:52.769332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.341 [2024-04-19 04:05:52.769339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:121144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794e000 len:0x1000 key:0x1be800 00:15:55.341 [2024-04-19 04:05:52.769345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.341 [2024-04-19 04:05:52.769352] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:121152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007950000 len:0x1000 key:0x1be800 00:15:55.341 [2024-04-19 04:05:52.769360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.341 [2024-04-19 04:05:52.769368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007952000 len:0x1000 key:0x1be800 00:15:55.341 [2024-04-19 04:05:52.769373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.341 [2024-04-19 04:05:52.769381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:121168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007954000 len:0x1000 key:0x1be800 00:15:55.341 [2024-04-19 04:05:52.769387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.341 [2024-04-19 04:05:52.769394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:121176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007956000 len:0x1000 key:0x1be800 00:15:55.341 [2024-04-19 04:05:52.769407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.341 [2024-04-19 04:05:52.769414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:121184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007958000 len:0x1000 key:0x1be800 00:15:55.341 [2024-04-19 04:05:52.769420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.341 [2024-04-19 04:05:52.769427] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:105 nsid:1 lba:121192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795a000 len:0x1000 key:0x1be800 00:15:55.341 [2024-04-19 04:05:52.769433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.341 [2024-04-19 04:05:52.769440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:121200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795c000 len:0x1000 key:0x1be800 00:15:55.341 [2024-04-19 04:05:52.769446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.341 [2024-04-19 04:05:52.769454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:121208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795e000 len:0x1000 key:0x1be800 00:15:55.341 [2024-04-19 04:05:52.769460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.341 [2024-04-19 04:05:52.769467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:121216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007960000 len:0x1000 key:0x1be800 00:15:55.341 [2024-04-19 04:05:52.769473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.341 [2024-04-19 04:05:52.769480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:121224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007962000 len:0x1000 key:0x1be800 00:15:55.341 [2024-04-19 04:05:52.769485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.341 [2024-04-19 04:05:52.769494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 
lba:121232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007964000 len:0x1000 key:0x1be800 00:15:55.341 [2024-04-19 04:05:52.769500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.341 [2024-04-19 04:05:52.769509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:121240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007966000 len:0x1000 key:0x1be800 00:15:55.341 [2024-04-19 04:05:52.769515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.341 [2024-04-19 04:05:52.769522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:121248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007968000 len:0x1000 key:0x1be800 00:15:55.341 [2024-04-19 04:05:52.769528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.341 [2024-04-19 04:05:52.769535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:121256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796a000 len:0x1000 key:0x1be800 00:15:55.341 [2024-04-19 04:05:52.769541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.341 [2024-04-19 04:05:52.769548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796c000 len:0x1000 key:0x1be800 00:15:55.341 [2024-04-19 04:05:52.769555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:1e0425b0 sqhd:2530 p:0 m:0 dnr:0 00:15:55.341 [2024-04-19 04:05:52.781244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:55.341 [2024-04-19 04:05:52.781256] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:55.341 [2024-04-19 04:05:52.781261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121272 len:8 PRP1 0x0 PRP2 0x0 00:15:55.341 [2024-04-19 04:05:52.781268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.341 [2024-04-19 04:05:52.781309] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:15:55.341 [2024-04-19 04:05:52.783899] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:15:55.341 [2024-04-19 04:05:52.783914] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:15:55.341 [2024-04-19 04:05:52.783919] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:15:55.341 [2024-04-19 04:05:52.783932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:55.341 [2024-04-19 04:05:52.783939] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:15:55.341 [2024-04-19 04:05:52.784149] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:15:55.341 [2024-04-19 04:05:52.784157] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:15:55.341 [2024-04-19 04:05:52.784164] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:15:55.341 [2024-04-19 04:05:52.784180] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:15:55.341 [2024-04-19 04:05:52.784188] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:15:55.341 [2024-04-19 04:05:53.787323] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:15:55.341 [2024-04-19 04:05:53.787353] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:15:55.341 [2024-04-19 04:05:53.787358] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:15:55.341 [2024-04-19 04:05:53.787373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:55.341 [2024-04-19 04:05:53.787379] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:15:55.341 [2024-04-19 04:05:53.787390] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:15:55.341 [2024-04-19 04:05:53.787396] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:15:55.341 [2024-04-19 04:05:53.787407] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:15:55.341 [2024-04-19 04:05:53.787424] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:15:55.341 [2024-04-19 04:05:53.787431] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:15:55.341 [2024-04-19 04:05:54.792410] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:15:55.341 [2024-04-19 04:05:54.792441] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:15:55.341 [2024-04-19 04:05:54.792447] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:15:55.341 [2024-04-19 04:05:54.792470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:55.341 [2024-04-19 04:05:54.792477] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:15:55.341 [2024-04-19 04:05:54.792521] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:15:55.341 [2024-04-19 04:05:54.792529] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:15:55.341 [2024-04-19 04:05:54.792536] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:15:55.341 [2024-04-19 04:05:54.792565] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:15:55.341 [2024-04-19 04:05:54.792571] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:15:55.341 [2024-04-19 04:05:56.798265] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:15:55.341 [2024-04-19 04:05:56.798296] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:15:55.341 [2024-04-19 04:05:56.798317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:55.341 [2024-04-19 04:05:56.798324] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:15:55.341 [2024-04-19 04:05:56.798335] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:15:55.341 [2024-04-19 04:05:56.798341] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:15:55.341 [2024-04-19 04:05:56.798348] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:15:55.341 [2024-04-19 04:05:56.798373] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:15:55.341 [2024-04-19 04:05:56.798380] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:15:55.341 [2024-04-19 04:05:58.803881] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:15:55.342 [2024-04-19 04:05:58.803906] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:15:55.342 [2024-04-19 04:05:58.803926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:55.342 [2024-04-19 04:05:58.803933] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:15:55.342 [2024-04-19 04:05:58.803943] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:15:55.342 [2024-04-19 04:05:58.803949] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:15:55.342 [2024-04-19 04:05:58.803956] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:15:55.342 [2024-04-19 04:05:58.803975] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:15:55.342 [2024-04-19 04:05:58.803981] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:15:55.342 [2024-04-19 04:06:00.810724] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:15:55.342 [2024-04-19 04:06:00.810758] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:15:55.342 [2024-04-19 04:06:00.810779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:55.342 [2024-04-19 04:06:00.810786] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:15:55.342 [2024-04-19 04:06:00.810797] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:15:55.342 [2024-04-19 04:06:00.810807] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:15:55.342 [2024-04-19 04:06:00.810814] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:15:55.342 [2024-04-19 04:06:00.810840] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:55.342 [2024-04-19 04:06:00.810846] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:15:55.342 [2024-04-19 04:06:01.866716] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:55.342 00:15:55.342 Latency(us) 00:15:55.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:55.342 Job: Nvme_mlx_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:55.342 Verification LBA range: start 0x0 length 0x8000 00:15:55.342 Nvme_mlx_0_0n1 : 90.01 11962.74 46.73 0.00 0.00 10679.78 1796.17 11035679.86 00:15:55.342 Job: Nvme_mlx_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:55.342 Verification LBA range: start 0x0 length 0x8000 00:15:55.342 Nvme_mlx_0_1n1 : 90.01 10573.86 41.30 0.00 0.00 12083.53 1517.04 11035679.86 00:15:55.342 =================================================================================================================== 00:15:55.342 Total : 22536.60 88.03 0.00 0.00 11338.40 1517.04 11035679.86 00:15:55.342 Received shutdown signal, test time was about 90.000000 seconds 00:15:55.342 00:15:55.342 Latency(us) 00:15:55.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:55.342 =================================================================================================================== 00:15:55.342 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:55.342 04:07:07 -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT 00:15:55.342 04:07:07 -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:15:55.342 04:07:07 -- target/device_removal.sh@202 -- # killprocess 290652 00:15:55.342 04:07:07 -- common/autotest_common.sh@936 -- # '[' -z 290652 ']' 00:15:55.342 04:07:07 -- common/autotest_common.sh@940 -- # kill -0 290652 00:15:55.342 04:07:07 -- common/autotest_common.sh@941 -- # uname 00:15:55.342 04:07:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:55.342 04:07:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 290652 00:15:55.342 04:07:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:55.342 04:07:07 
-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:55.342 04:07:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 290652' 00:15:55.342 killing process with pid 290652 00:15:55.342 04:07:07 -- common/autotest_common.sh@955 -- # kill 290652 00:15:55.342 04:07:07 -- common/autotest_common.sh@960 -- # wait 290652 00:15:55.342 04:07:07 -- target/device_removal.sh@203 -- # nvmfpid= 00:15:55.342 04:07:07 -- target/device_removal.sh@205 -- # return 0 00:15:55.342 00:15:55.342 real 1m33.040s 00:15:55.342 user 4m25.865s 00:15:55.342 sys 0m5.541s 00:15:55.342 04:07:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:55.342 04:07:07 -- common/autotest_common.sh@10 -- # set +x 00:15:55.342 ************************************ 00:15:55.342 END TEST nvmf_device_removal_pci_remove_no_srq 00:15:55.342 ************************************ 00:15:55.342 04:07:07 -- target/device_removal.sh@312 -- # run_test nvmf_device_removal_pci_remove test_remove_and_rescan 00:15:55.342 04:07:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:55.342 04:07:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:55.342 04:07:07 -- common/autotest_common.sh@10 -- # set +x 00:15:55.342 ************************************ 00:15:55.342 START TEST nvmf_device_removal_pci_remove 00:15:55.342 ************************************ 00:15:55.342 04:07:07 -- common/autotest_common.sh@1111 -- # test_remove_and_rescan 00:15:55.342 04:07:07 -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3 00:15:55.342 04:07:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:55.342 04:07:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:55.342 04:07:07 -- common/autotest_common.sh@10 -- # set +x 00:15:55.342 04:07:07 -- nvmf/common.sh@470 -- # nvmfpid=308175 00:15:55.342 04:07:07 -- nvmf/common.sh@471 -- # waitforlisten 308175 00:15:55.342 04:07:07 -- nvmf/common.sh@469 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:55.342 04:07:07 -- common/autotest_common.sh@817 -- # '[' -z 308175 ']' 00:15:55.342 04:07:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.342 04:07:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:55.342 04:07:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.342 04:07:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:55.342 04:07:07 -- common/autotest_common.sh@10 -- # set +x 00:15:55.342 [2024-04-19 04:07:07.636323] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:15:55.342 [2024-04-19 04:07:07.636362] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.342 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.342 [2024-04-19 04:07:07.681462] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:55.342 [2024-04-19 04:07:07.748938] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.342 [2024-04-19 04:07:07.748977] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:55.342 [2024-04-19 04:07:07.748983] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:55.342 [2024-04-19 04:07:07.748989] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:55.342 [2024-04-19 04:07:07.748993] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:55.342 [2024-04-19 04:07:07.749082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.342 [2024-04-19 04:07:07.749083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.342 04:07:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:55.342 04:07:08 -- common/autotest_common.sh@850 -- # return 0 00:15:55.342 04:07:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:55.342 04:07:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:55.342 04:07:08 -- common/autotest_common.sh@10 -- # set +x 00:15:55.342 04:07:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.342 04:07:08 -- target/device_removal.sh@130 -- # create_subsystem_and_connect 00:15:55.342 04:07:08 -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:15:55.342 04:07:08 -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:15:55.342 04:07:08 -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:55.342 04:07:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.342 04:07:08 -- common/autotest_common.sh@10 -- # set +x 00:15:55.342 [2024-04-19 04:07:08.465870] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa1c060/0xa20550) succeed. 00:15:55.342 [2024-04-19 04:07:08.473828] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa1d560/0xa61be0) succeed. 
00:15:55.342 04:07:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.342 04:07:08 -- target/device_removal.sh@49 -- # get_rdma_if_list 00:15:55.342 04:07:08 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:55.342 04:07:08 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:55.342 04:07:08 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:55.342 04:07:08 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:55.342 04:07:08 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:55.342 04:07:08 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:55.342 04:07:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:55.342 04:07:08 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:55.342 04:07:08 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:55.342 04:07:08 -- nvmf/common.sh@105 -- # continue 2 00:15:55.342 04:07:08 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:55.342 04:07:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:55.342 04:07:08 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:55.342 04:07:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:55.342 04:07:08 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:55.342 04:07:08 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:55.342 04:07:08 -- nvmf/common.sh@105 -- # continue 2 00:15:55.342 04:07:08 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:15:55.343 04:07:08 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0 00:15:55.343 04:07:08 -- target/device_removal.sh@25 -- # local -a dev_name 00:15:55.343 04:07:08 -- target/device_removal.sh@27 -- # dev_name=mlx_0_0 00:15:55.343 04:07:08 -- target/device_removal.sh@28 -- # malloc_name=mlx_0_0 00:15:55.343 04:07:08 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0 00:15:55.343 
04:07:08 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:15:55.343 04:07:08 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:15:55.343 04:07:08 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0 00:15:55.343 04:07:08 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:55.343 04:07:08 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:55.343 04:07:08 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:55.343 04:07:08 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:55.343 04:07:08 -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:15:55.343 04:07:08 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0 00:15:55.343 04:07:08 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:15:55.343 04:07:08 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:15:55.343 04:07:08 -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0 00:15:55.343 04:07:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.343 04:07:08 -- common/autotest_common.sh@10 -- # set +x 00:15:55.343 04:07:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.343 04:07:08 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0 00:15:55.343 04:07:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.343 04:07:08 -- common/autotest_common.sh@10 -- # set +x 00:15:55.343 04:07:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.343 04:07:08 -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0 00:15:55.343 04:07:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.343 04:07:08 -- common/autotest_common.sh@10 -- # set +x 00:15:55.343 04:07:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.343 04:07:08 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t 
rdma -a 192.168.100.8 -s 4420 00:15:55.343 04:07:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.343 04:07:08 -- common/autotest_common.sh@10 -- # set +x 00:15:55.343 [2024-04-19 04:07:08.648499] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:55.343 04:07:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.343 04:07:08 -- target/device_removal.sh@41 -- # return 0 00:15:55.343 04:07:08 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0 00:15:55.343 04:07:08 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:15:55.343 04:07:08 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1 00:15:55.343 04:07:08 -- target/device_removal.sh@25 -- # local -a dev_name 00:15:55.343 04:07:08 -- target/device_removal.sh@27 -- # dev_name=mlx_0_1 00:15:55.343 04:07:08 -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1 00:15:55.343 04:07:08 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1 00:15:55.343 04:07:08 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:15:55.343 04:07:08 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:15:55.343 04:07:08 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1 00:15:55.343 04:07:08 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:55.343 04:07:08 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:55.343 04:07:08 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:55.343 04:07:08 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:55.343 04:07:08 -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:15:55.343 04:07:08 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1 00:15:55.343 04:07:08 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:15:55.343 04:07:08 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:15:55.343 04:07:08 -- target/device_removal.sh@36 -- # rpc_cmd 
bdev_malloc_create 128 512 -b mlx_0_1 00:15:55.343 04:07:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.343 04:07:08 -- common/autotest_common.sh@10 -- # set +x 00:15:55.343 04:07:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.343 04:07:08 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1 00:15:55.343 04:07:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.343 04:07:08 -- common/autotest_common.sh@10 -- # set +x 00:15:55.343 04:07:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.343 04:07:08 -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1 00:15:55.343 04:07:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.343 04:07:08 -- common/autotest_common.sh@10 -- # set +x 00:15:55.343 04:07:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.343 04:07:08 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 00:15:55.343 04:07:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.343 04:07:08 -- common/autotest_common.sh@10 -- # set +x 00:15:55.343 [2024-04-19 04:07:08.726451] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:15:55.343 04:07:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.343 04:07:08 -- target/device_removal.sh@41 -- # return 0 00:15:55.343 04:07:08 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1 00:15:55.343 04:07:08 -- target/device_removal.sh@53 -- # return 0 00:15:55.343 04:07:08 -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1 00:15:55.343 04:07:08 -- target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1') 00:15:55.343 04:07:08 -- target/device_removal.sh@87 -- # local dev_names 00:15:55.343 04:07:08 -- 
target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:55.343 04:07:08 -- target/device_removal.sh@91 -- # bdevperf_pid=308480 00:15:55.343 04:07:08 -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:55.343 04:07:08 -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:15:55.343 04:07:08 -- target/device_removal.sh@94 -- # waitforlisten 308480 /var/tmp/bdevperf.sock 00:15:55.343 04:07:08 -- common/autotest_common.sh@817 -- # '[' -z 308480 ']' 00:15:55.343 04:07:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:55.343 04:07:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:55.343 04:07:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:55.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:55.343 04:07:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:55.343 04:07:08 -- common/autotest_common.sh@10 -- # set +x 00:15:55.343 04:07:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:55.343 04:07:09 -- common/autotest_common.sh@850 -- # return 0 00:15:55.343 04:07:09 -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:15:55.343 04:07:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.343 04:07:09 -- common/autotest_common.sh@10 -- # set +x 00:15:55.343 04:07:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.343 04:07:09 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:15:55.343 04:07:09 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0 00:15:55.343 04:07:09 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:15:55.343 04:07:09 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:15:55.343 04:07:09 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0 00:15:55.343 04:07:09 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:55.343 04:07:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:55.343 04:07:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:55.343 04:07:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:55.343 04:07:09 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:15:55.343 04:07:09 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1 00:15:55.343 04:07:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.343 04:07:09 -- common/autotest_common.sh@10 -- # set +x 00:15:55.343 Nvme_mlx_0_0n1 00:15:55.343 04:07:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.343 04:07:09 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:15:55.343 
04:07:09 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1 00:15:55.343 04:07:09 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:15:55.343 04:07:09 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:15:55.343 04:07:09 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1 00:15:55.343 04:07:09 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:55.343 04:07:09 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:55.343 04:07:09 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:55.343 04:07:09 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:55.343 04:07:09 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:15:55.343 04:07:09 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1 00:15:55.343 04:07:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.343 04:07:09 -- common/autotest_common.sh@10 -- # set +x 00:15:55.343 Nvme_mlx_0_1n1 00:15:55.343 04:07:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.343 04:07:09 -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=308716 00:15:55.343 04:07:09 -- target/device_removal.sh@112 -- # sleep 5 00:15:55.343 04:07:09 -- target/device_removal.sh@109 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:00.610 04:07:14 -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:16:00.610 04:07:14 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0 00:16:00.610 04:07:14 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0 00:16:00.610 04:07:14 -- target/device_removal.sh@71 -- # dev_name=mlx_0_0 00:16:00.610 04:07:14 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0 00:16:00.610 04:07:14 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 
00:16:00.610 04:07:14 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device 00:16:00.610 04:07:14 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/infiniband 00:16:00.610 04:07:14 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0 00:16:00.610 04:07:14 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0 00:16:00.610 04:07:14 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:00.610 04:07:14 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:00.610 04:07:14 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:00.610 04:07:14 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:00.610 04:07:14 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:16:00.610 04:07:14 -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0 00:16:00.610 04:07:14 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:16:00.610 04:07:14 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device 00:16:00.610 04:07:14 -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0 00:16:00.610 04:07:14 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:16:00.610 04:07:14 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:16:00.610 04:07:14 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:16:00.610 04:07:14 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:16:00.610 04:07:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:00.610 04:07:14 -- target/device_removal.sh@77 -- # grep mlx5_0 00:16:00.610 04:07:14 -- common/autotest_common.sh@10 -- # set +x 00:16:00.610 04:07:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:00.610 mlx5_0 00:16:00.610 04:07:14 -- target/device_removal.sh@78 -- # return 0 00:16:00.610 04:07:14 -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0 00:16:00.610 
04:07:14 -- target/device_removal.sh@66 -- # dev_name=mlx_0_0 00:16:00.610 04:07:14 -- target/device_removal.sh@67 -- # echo 1 00:16:00.610 04:07:14 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0 00:16:00.610 04:07:14 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:16:00.610 04:07:14 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device 00:16:00.610 [2024-04-19 04:07:14.888020] rdma.c:3610:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed. 00:16:00.610 [2024-04-19 04:07:14.888292] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:16:00.610 [2024-04-19 04:07:14.893017] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:16:00.610 [2024-04-19 04:07:14.893037] rdma.c: 916:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 96 00:16:07.170 04:07:20 -- target/device_removal.sh@147 -- # seq 1 10 00:16:07.170 04:07:20 -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:16:07.170 04:07:20 -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:16:07.170 04:07:20 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:16:07.170 04:07:20 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:16:07.170 04:07:20 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:16:07.170 04:07:20 -- target/device_removal.sh@77 -- # grep mlx5_0 00:16:07.170 04:07:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:07.170 04:07:20 -- common/autotest_common.sh@10 -- # set +x 00:16:07.170 04:07:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:07.170 04:07:20 -- target/device_removal.sh@78 -- # return 1 00:16:07.170 04:07:20 -- target/device_removal.sh@149 -- # break 00:16:07.170 04:07:20 -- target/device_removal.sh@158 -- # 
get_rdma_dev_count_in_nvmf_tgt 00:16:07.170 04:07:20 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:16:07.170 04:07:20 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:16:07.170 04:07:20 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:16:07.170 04:07:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:07.170 04:07:20 -- common/autotest_common.sh@10 -- # set +x 00:16:07.170 04:07:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:07.170 04:07:20 -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:16:07.170 04:07:20 -- target/device_removal.sh@160 -- # rescan_pci 00:16:07.170 04:07:20 -- target/device_removal.sh@57 -- # echo 1 00:16:07.170 [2024-04-19 04:07:21.405914] rdma.c:3314:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0xaf9040, err 11. Skip rescan. 00:16:07.428 [2024-04-19 04:07:21.779861] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa1ecf0/0xa20550) succeed. 00:16:07.428 [2024-04-19 04:07:21.779915] rdma.c:3367:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 
00:16:08.361 04:07:22 -- target/device_removal.sh@162 -- # seq 1 10 00:16:08.362 04:07:22 -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:16:08.362 04:07:22 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/net 00:16:08.362 04:07:22 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0 00:16:08.362 04:07:22 -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]] 00:16:08.362 04:07:22 -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]] 00:16:08.362 04:07:22 -- target/device_removal.sh@171 -- # break 00:16:08.362 04:07:22 -- target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]] 00:16:08.362 04:07:22 -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up 00:16:10.259 04:07:24 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0 00:16:10.260 04:07:24 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:10.260 04:07:24 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:10.260 04:07:24 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:10.260 04:07:24 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:10.260 04:07:24 -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:16:10.260 04:07:24 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:16:10.260 04:07:24 -- target/device_removal.sh@186 -- # seq 1 10 00:16:10.260 04:07:24 -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:16:10.260 04:07:24 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:16:10.260 04:07:24 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:16:10.260 04:07:24 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:16:10.260 04:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:10.260 04:07:24 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:16:10.260 04:07:24 -- common/autotest_common.sh@10 -- # set +x 00:16:10.260 [2024-04-19 04:07:24.776008] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** 
NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:10.260 [2024-04-19 04:07:24.776038] rdma.c:3373:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:16:10.260 [2024-04-19 04:07:24.776051] rdma.c:3897:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:16:10.260 [2024-04-19 04:07:24.776062] rdma.c:3897:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:16:10.518 04:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:10.518 04:07:24 -- target/device_removal.sh@187 -- # ib_count=2 00:16:10.518 04:07:24 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:16:10.518 04:07:24 -- target/device_removal.sh@189 -- # break 00:16:10.518 04:07:24 -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:16:10.518 04:07:24 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_1 00:16:10.518 04:07:24 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1 00:16:10.518 04:07:24 -- target/device_removal.sh@71 -- # dev_name=mlx_0_1 00:16:10.518 04:07:24 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1 00:16:10.518 04:07:24 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:16:10.518 04:07:24 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:16:10.518 04:07:24 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1/infiniband 00:16:10.518 04:07:24 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1 00:16:10.518 04:07:24 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1 00:16:10.518 04:07:24 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:10.518 04:07:24 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:10.518 04:07:24 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:10.518 04:07:24 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:10.518 04:07:24 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:16:10.518 04:07:24 
-- target/device_removal.sh@138 -- # get_pci_dir mlx_0_1 00:16:10.518 04:07:24 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:16:10.518 04:07:24 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:16:10.518 04:07:24 -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1 00:16:10.518 04:07:24 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:16:10.518 04:07:24 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:16:10.518 04:07:24 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:16:10.518 04:07:24 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:16:10.518 04:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:10.518 04:07:24 -- target/device_removal.sh@77 -- # grep mlx5_1 00:16:10.518 04:07:24 -- common/autotest_common.sh@10 -- # set +x 00:16:10.518 04:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:10.518 mlx5_1 00:16:10.518 04:07:24 -- target/device_removal.sh@78 -- # return 0 00:16:10.518 04:07:24 -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1 00:16:10.518 04:07:24 -- target/device_removal.sh@66 -- # dev_name=mlx_0_1 00:16:10.518 04:07:24 -- target/device_removal.sh@67 -- # echo 1 00:16:10.518 04:07:24 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1 00:16:10.518 04:07:24 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:16:10.518 04:07:24 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:16:10.518 [2024-04-19 04:07:24.929615] rdma.c:3610:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed. 
00:16:10.518 [2024-04-19 04:07:24.929688] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:16:10.518 [2024-04-19 04:07:24.938230] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:16:10.518 [2024-04-19 04:07:24.938245] rdma.c: 916:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 93 00:16:17.077 04:07:30 -- target/device_removal.sh@147 -- # seq 1 10 00:16:17.077 04:07:30 -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:16:17.077 04:07:30 -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:16:17.077 04:07:30 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:16:17.077 04:07:30 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:16:17.077 04:07:30 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:16:17.077 04:07:30 -- target/device_removal.sh@77 -- # grep mlx5_1 00:16:17.077 04:07:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.077 04:07:30 -- common/autotest_common.sh@10 -- # set +x 00:16:17.077 04:07:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.077 04:07:30 -- target/device_removal.sh@78 -- # return 1 00:16:17.077 04:07:30 -- target/device_removal.sh@149 -- # break 00:16:17.077 04:07:30 -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:16:17.077 04:07:30 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:16:17.077 04:07:30 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:16:17.077 04:07:30 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:16:17.077 04:07:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.077 04:07:30 -- common/autotest_common.sh@10 -- # set +x 00:16:17.077 04:07:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.077 04:07:30 -- target/device_removal.sh@158 -- # 
ib_count_after_remove=1 00:16:17.077 04:07:30 -- target/device_removal.sh@160 -- # rescan_pci 00:16:17.077 04:07:30 -- target/device_removal.sh@57 -- # echo 1 00:16:18.012 [2024-04-19 04:07:32.368841] rdma.c:3314:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0xa07a80, err 11. Skip rescan. 00:16:18.270 [2024-04-19 04:07:32.720520] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa1f370/0xa61be0) succeed. 00:16:18.270 [2024-04-19 04:07:32.720585] rdma.c:3367:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen. 00:16:19.202 04:07:33 -- target/device_removal.sh@162 -- # seq 1 10 00:16:19.202 04:07:33 -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:16:19.202 04:07:33 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1/net 00:16:19.202 04:07:33 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1 00:16:19.202 04:07:33 -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]] 00:16:19.202 04:07:33 -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]] 00:16:19.202 04:07:33 -- target/device_removal.sh@171 -- # break 00:16:19.202 04:07:33 -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]] 00:16:19.202 04:07:33 -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up 00:16:19.202 04:07:33 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1 00:16:19.202 04:07:33 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:19.202 04:07:33 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:19.202 04:07:33 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:19.202 04:07:33 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:19.202 04:07:33 -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:16:19.202 04:07:33 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:16:19.460 04:07:33 -- target/device_removal.sh@186 -- # seq 1 10 00:16:19.460 04:07:33 -- target/device_removal.sh@186 -- # for 
i in $(seq 1 10) 00:16:19.460 04:07:33 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:16:19.460 04:07:33 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:16:19.460 04:07:33 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:16:19.460 04:07:33 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:16:19.460 04:07:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:19.460 04:07:33 -- common/autotest_common.sh@10 -- # set +x 00:16:19.460 [2024-04-19 04:07:33.845006] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:16:19.460 [2024-04-19 04:07:33.845043] rdma.c:3373:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back 00:16:19.460 [2024-04-19 04:07:33.845057] rdma.c:3897:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:16:19.460 [2024-04-19 04:07:33.845068] rdma.c:3897:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:16:19.460 04:07:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.460 04:07:33 -- target/device_removal.sh@187 -- # ib_count=2 00:16:19.460 04:07:33 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:16:19.460 04:07:33 -- target/device_removal.sh@189 -- # break 00:16:19.460 04:07:33 -- target/device_removal.sh@200 -- # stop_bdevperf 00:16:19.460 04:07:33 -- target/device_removal.sh@116 -- # wait 308716 00:17:27.141 0 00:17:27.141 04:08:40 -- target/device_removal.sh@118 -- # killprocess 308480 00:17:27.141 04:08:40 -- common/autotest_common.sh@936 -- # '[' -z 308480 ']' 00:17:27.141 04:08:40 -- common/autotest_common.sh@940 -- # kill -0 308480 00:17:27.141 04:08:40 -- common/autotest_common.sh@941 -- # uname 00:17:27.141 04:08:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:27.141 04:08:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 308480 00:17:27.141 04:08:40 -- 
common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:27.141 04:08:40 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:27.141 04:08:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 308480' 00:17:27.141 killing process with pid 308480 00:17:27.141 04:08:40 -- common/autotest_common.sh@955 -- # kill 308480 00:17:27.141 04:08:40 -- common/autotest_common.sh@960 -- # wait 308480 00:17:27.141 04:08:40 -- target/device_removal.sh@119 -- # bdevperf_pid= 00:17:27.141 04:08:40 -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:17:27.141 [2024-04-19 04:07:08.779753] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:17:27.141 [2024-04-19 04:07:08.779803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid308480 ] 00:17:27.141 EAL: No free 2048 kB hugepages reported on node 1 00:17:27.141 [2024-04-19 04:07:08.826857] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.141 [2024-04-19 04:07:08.891214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.141 Running I/O for 90 seconds... 
00:17:27.141 [2024-04-19 04:07:14.888801] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:17:27.141 [2024-04-19 04:07:14.888834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.141 [2024-04-19 04:07:14.888844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32597 cdw0:16 sqhd:93b9 p:0 m:0 dnr:0 00:17:27.141 [2024-04-19 04:07:14.888852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.141 [2024-04-19 04:07:14.888858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32597 cdw0:16 sqhd:93b9 p:0 m:0 dnr:0 00:17:27.141 [2024-04-19 04:07:14.888865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.141 [2024-04-19 04:07:14.888871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32597 cdw0:16 sqhd:93b9 p:0 m:0 dnr:0 00:17:27.141 [2024-04-19 04:07:14.888878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.141 [2024-04-19 04:07:14.888884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32597 cdw0:16 sqhd:93b9 p:0 m:0 dnr:0 00:17:27.141 [2024-04-19 04:07:14.890581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:27.141 [2024-04-19 04:07:14.890595] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 
00:17:27.141 [2024-04-19 04:07:14.890623] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:17:27.141 [2024-04-19 04:07:14.898796] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:14.908818] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:14.918843] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:14.928871] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:14.938897] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:14.948921] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:14.958945] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:14.968972] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:14.978999] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:14.989024] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:14.999049] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.009284] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:27.141 [2024-04-19 04:07:15.019310] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.029831] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.039857] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.050146] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.060174] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.070550] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.080819] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.090983] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.101010] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.111248] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.121275] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.131299] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.141439] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:27.141 [2024-04-19 04:07:15.151519] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.161546] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.172228] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.182622] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.193043] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.203405] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.213455] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.223480] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.233632] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.243658] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.253880] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.263905] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.274117] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:27.141 [2024-04-19 04:07:15.284143] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.294862] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.305440] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.315973] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.326522] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.336652] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.346678] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.356860] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.367054] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.377375] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.387431] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.397456] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.408010] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:27.141 [2024-04-19 04:07:15.418715] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.429199] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.439594] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.449761] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.459786] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.470041] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.480067] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.490094] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.141 [2024-04-19 04:07:15.500119] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.510146] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.520501] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.530528] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.540989] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:27.142 [2024-04-19 04:07:15.551417] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.561872] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.572218] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.582256] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.592283] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.602332] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.612362] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.622387] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.632413] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.642440] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.652791] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.662815] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.673224] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:27.142 [2024-04-19 04:07:15.683777] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.694317] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.704960] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.714986] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.725012] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.735156] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.745181] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.755206] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.765363] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.775390] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.785776] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.796353] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.142 [2024-04-19 04:07:15.806733] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:27.142 [2024-04-19 04:07:15.817161] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:27.142 [2024-04-19 04:07:15.827484] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:27.142 [2024-04-19 04:07:15.837511] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:27.142 [2024-04-19 04:07:15.847536] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:27.142 [2024-04-19 04:07:15.857576] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:27.142 [2024-04-19 04:07:15.867604] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:27.142 [2024-04-19 04:07:15.877628] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:27.142 [2024-04-19 04:07:15.888003] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:27.142 [2024-04-19 04:07:15.893026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:246448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.142 [2024-04-19 04:07:15.893042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.142 [2024-04-19 04:07:15.893058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:246456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.142 [2024-04-19 04:07:15.893068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.142 [2024-04-19 04:07:15.893076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:246464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.142 [2024-04-19 04:07:15.893082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.142 [2024-04-19 04:07:15.893089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:246472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.142 [2024-04-19 04:07:15.893095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.142 [2024-04-19 04:07:15.893102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:246480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.142 [2024-04-19 04:07:15.893108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.142 [2024-04-19 04:07:15.893115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:246488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.142 [2024-04-19 04:07:15.893121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.142 [2024-04-19 04:07:15.893128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:246496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.142 [2024-04-19 04:07:15.893134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.142 [2024-04-19 04:07:15.893141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:246504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.142 [2024-04-19 04:07:15.893147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.142 [2024-04-19 04:07:15.893154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:246512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.142 [2024-04-19 04:07:15.893159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.142 [2024-04-19 04:07:15.893166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:246520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.142 [2024-04-19 04:07:15.893172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.142 [2024-04-19 04:07:15.893179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:246528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.142 [2024-04-19 04:07:15.893185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.142 [2024-04-19 04:07:15.893191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:246536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.142 [2024-04-19 04:07:15.893197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.142 [2024-04-19 04:07:15.893204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:246544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.142 [2024-04-19 04:07:15.893211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.142 [2024-04-19 04:07:15.893218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:246552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.142 [2024-04-19 04:07:15.893225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.142 [2024-04-19 04:07:15.893232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:246560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.142 [2024-04-19 04:07:15.893238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.142 [2024-04-19 04:07:15.893245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:246568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.142 [2024-04-19 04:07:15.893250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.142 [2024-04-19 04:07:15.893258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:246576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.142 [2024-04-19 04:07:15.893264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.142 [2024-04-19 04:07:15.893273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:246584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.142 [2024-04-19 04:07:15.893280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.142 [2024-04-19 04:07:15.893287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:246592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.142 [2024-04-19 04:07:15.893293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.142 [2024-04-19 04:07:15.893300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:246600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.142 [2024-04-19 04:07:15.893306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.142 [2024-04-19 04:07:15.893314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:246608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.142 [2024-04-19 04:07:15.893319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.142 [2024-04-19 04:07:15.893327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:246616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.142 [2024-04-19 04:07:15.893332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.142 [2024-04-19 04:07:15.893339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:246624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.143 [2024-04-19 04:07:15.893345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:246632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.143 [2024-04-19 04:07:15.893358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:246640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.143 [2024-04-19 04:07:15.893371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:246648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.143 [2024-04-19 04:07:15.893385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:246656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.143 [2024-04-19 04:07:15.893398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:246664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.143 [2024-04-19 04:07:15.893415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:246672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.143 [2024-04-19 04:07:15.893429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:246680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.143 [2024-04-19 04:07:15.893442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:246688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.143 [2024-04-19 04:07:15.893454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:246696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.143 [2024-04-19 04:07:15.893467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:246704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.143 [2024-04-19 04:07:15.893480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:246712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.143 [2024-04-19 04:07:15.893494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:246720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.143 [2024-04-19 04:07:15.893507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:246728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.143 [2024-04-19 04:07:15.893519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:246736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.143 [2024-04-19 04:07:15.893532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:246744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.143 [2024-04-19 04:07:15.893545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:246752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.143 [2024-04-19 04:07:15.893559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:246760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.143 [2024-04-19 04:07:15.893572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:246768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.143 [2024-04-19 04:07:15.893584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:246776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:27.143 [2024-04-19 04:07:15.893597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:245760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007700000 len:0x1000 key:0x1810ef
00:17:27.143 [2024-04-19 04:07:15.893611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:245768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007702000 len:0x1000 key:0x1810ef
00:17:27.143 [2024-04-19 04:07:15.893625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:245776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007704000 len:0x1000 key:0x1810ef
00:17:27.143 [2024-04-19 04:07:15.893638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:245784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007706000 len:0x1000 key:0x1810ef
00:17:27.143 [2024-04-19 04:07:15.893651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:245792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007708000 len:0x1000 key:0x1810ef
00:17:27.143 [2024-04-19 04:07:15.893664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:245800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770a000 len:0x1000 key:0x1810ef
00:17:27.143 [2024-04-19 04:07:15.893678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:245808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770c000 len:0x1000 key:0x1810ef
00:17:27.143 [2024-04-19 04:07:15.893691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:245816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770e000 len:0x1000 key:0x1810ef
00:17:27.143 [2024-04-19 04:07:15.893705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:245824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007710000 len:0x1000 key:0x1810ef
00:17:27.143 [2024-04-19 04:07:15.893719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:245832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007712000 len:0x1000 key:0x1810ef
00:17:27.143 [2024-04-19 04:07:15.893732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:245840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007714000 len:0x1000 key:0x1810ef
00:17:27.143 [2024-04-19 04:07:15.893745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:245848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007716000 len:0x1000 key:0x1810ef
00:17:27.143 [2024-04-19 04:07:15.893759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:245856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007718000 len:0x1000 key:0x1810ef
00:17:27.143 [2024-04-19 04:07:15.893772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:245864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771a000 len:0x1000 key:0x1810ef
00:17:27.143 [2024-04-19 04:07:15.893785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:245872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771c000 len:0x1000 key:0x1810ef
00:17:27.143 [2024-04-19 04:07:15.893798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:245880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771e000 len:0x1000 key:0x1810ef
00:17:27.143 [2024-04-19 04:07:15.893812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:245888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007720000 len:0x1000 key:0x1810ef
00:17:27.143 [2024-04-19 04:07:15.893825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.143 [2024-04-19 04:07:15.893832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:245896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007722000 len:0x1000 key:0x1810ef
00:17:27.143 [2024-04-19 04:07:15.893838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.893845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:245904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007724000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.893852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.893859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:245912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007726000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.893865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.893872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:245920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007728000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.893878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.893886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:245928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772a000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.893891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.893899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:245936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772c000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.893905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.893912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:245944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772e000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.893917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.893924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:245952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007730000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.893930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.893938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:245960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007732000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.893945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.893953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:245968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007734000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.893958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.893966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:245976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007736000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.893971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.893979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:245984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007738000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.893985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.893993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:245992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773a000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.893998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.894009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:246000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773c000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.894015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.894022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:246008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773e000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.894028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.894035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:246016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007740000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.894043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.894050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:246024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007742000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.894056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.894063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:246032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007744000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.894069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.894077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:246040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007746000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.894083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.894090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:246048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007748000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.894096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.894104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:246056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774a000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.894110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.894117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:246064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774c000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.894123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.894130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:246072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774e000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.894135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.894143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:246080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007750000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.894149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.894157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:246088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007752000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.894163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.894171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:246096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007754000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.894177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.894184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:246104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007756000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.894190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.894197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:246112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007758000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.894202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.894210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:246120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775a000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.894215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.894223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:246128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775c000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19 04:07:15.894229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0
00:17:27.144 [2024-04-19 04:07:15.894236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:246136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775e000 len:0x1000 key:0x1810ef
00:17:27.144 [2024-04-19
04:07:15.894242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.144 [2024-04-19 04:07:15.894249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:246144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007760000 len:0x1000 key:0x1810ef 00:17:27.144 [2024-04-19 04:07:15.894255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.144 [2024-04-19 04:07:15.894262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:246152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007762000 len:0x1000 key:0x1810ef 00:17:27.144 [2024-04-19 04:07:15.894268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.144 [2024-04-19 04:07:15.894275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:246160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007764000 len:0x1000 key:0x1810ef 00:17:27.144 [2024-04-19 04:07:15.894281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.144 [2024-04-19 04:07:15.894289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:246168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007766000 len:0x1000 key:0x1810ef 00:17:27.144 [2024-04-19 04:07:15.894294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.144 [2024-04-19 04:07:15.894302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:246176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007768000 len:0x1000 key:0x1810ef 00:17:27.144 [2024-04-19 04:07:15.894308] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.144 [2024-04-19 04:07:15.894316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:246184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776a000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 [2024-04-19 04:07:15.894329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:246192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776c000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 [2024-04-19 04:07:15.894342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:246200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776e000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 [2024-04-19 04:07:15.894355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:246208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007770000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 [2024-04-19 04:07:15.894368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:246216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007772000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894374] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 [2024-04-19 04:07:15.894381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:246224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007774000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 [2024-04-19 04:07:15.894395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:246232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007776000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 [2024-04-19 04:07:15.894411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:246240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007778000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 [2024-04-19 04:07:15.894424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:246248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777a000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 [2024-04-19 04:07:15.894437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:246256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777c000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 [2024-04-19 04:07:15.894450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:246264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777e000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 [2024-04-19 04:07:15.894465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:246272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007780000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 [2024-04-19 04:07:15.894478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:246280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007782000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 [2024-04-19 04:07:15.894498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:246288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007784000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 [2024-04-19 04:07:15.894510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:246296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007786000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 
sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 [2024-04-19 04:07:15.894524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:246304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007788000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 [2024-04-19 04:07:15.894537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:246312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778a000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 [2024-04-19 04:07:15.894550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:246320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778c000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 [2024-04-19 04:07:15.894564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:246328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778e000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 [2024-04-19 04:07:15.894577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:246336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007790000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 
[2024-04-19 04:07:15.894590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:246344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007792000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 [2024-04-19 04:07:15.894603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:246352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007794000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 [2024-04-19 04:07:15.894617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:246360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007796000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 [2024-04-19 04:07:15.894631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:246368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007798000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 [2024-04-19 04:07:15.894643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:246376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779a000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 [2024-04-19 04:07:15.894657] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:246384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779c000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 [2024-04-19 04:07:15.894671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:246392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779e000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 [2024-04-19 04:07:15.894684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:246400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a0000 len:0x1000 key:0x1810ef 00:17:27.145 [2024-04-19 04:07:15.894690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.145 [2024-04-19 04:07:15.894697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:246408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a2000 len:0x1000 key:0x1810ef 00:17:27.146 [2024-04-19 04:07:15.894703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.146 [2024-04-19 04:07:15.903578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:246416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a4000 len:0x1000 key:0x1810ef 00:17:27.146 [2024-04-19 04:07:15.903587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.146 [2024-04-19 04:07:15.903595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:246424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a6000 len:0x1000 key:0x1810ef 00:17:27.146 [2024-04-19 04:07:15.903602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.146 [2024-04-19 04:07:15.903609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:246432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a8000 len:0x1000 key:0x1810ef 00:17:27.146 [2024-04-19 04:07:15.903615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.146 [2024-04-19 04:07:15.915335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:27.146 [2024-04-19 04:07:15.915347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:27.146 [2024-04-19 04:07:15.915355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:246440 len:8 PRP1 0x0 PRP2 0x0 00:17:27.146 [2024-04-19 04:07:15.915362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.146 [2024-04-19 04:07:15.917488] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:17:27.146 [2024-04-19 04:07:15.917729] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:17:27.146 [2024-04-19 04:07:15.917741] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:17:27.146 [2024-04-19 04:07:15.917746] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:17:27.146 [2024-04-19 04:07:15.917760] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:27.146 [2024-04-19 04:07:15.917768] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:17:27.146 [2024-04-19 04:07:15.917792] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:17:27.146 [2024-04-19 04:07:15.917798] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:17:27.146 [2024-04-19 04:07:15.917804] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:17:27.146 [2024-04-19 04:07:15.917821] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:27.146 [2024-04-19 04:07:15.917827] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:17:27.146 [2024-04-19 04:07:16.921148] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:17:27.146 [2024-04-19 04:07:16.921182] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:17:27.146 [2024-04-19 04:07:16.921189] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:17:27.146 [2024-04-19 04:07:16.921207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:27.146 [2024-04-19 04:07:16.921214] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 
00:17:27.146 [2024-04-19 04:07:16.921228] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:17:27.146 [2024-04-19 04:07:16.921234] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:17:27.146 [2024-04-19 04:07:16.921241] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:17:27.146 [2024-04-19 04:07:16.921261] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:27.146 [2024-04-19 04:07:16.921268] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:17:27.146 [2024-04-19 04:07:17.923720] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:17:27.146 [2024-04-19 04:07:17.923752] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:17:27.146 [2024-04-19 04:07:17.923760] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:17:27.146 [2024-04-19 04:07:17.923777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:27.146 [2024-04-19 04:07:17.923784] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 
00:17:27.146 [2024-04-19 04:07:17.923794] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:17:27.146 [2024-04-19 04:07:17.923803] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:17:27.146 [2024-04-19 04:07:17.923810] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:17:27.146 [2024-04-19 04:07:17.923836] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:27.146 [2024-04-19 04:07:17.923843] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:17:27.146 [2024-04-19 04:07:19.928673] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:17:27.146 [2024-04-19 04:07:19.928707] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:17:27.146 [2024-04-19 04:07:19.928729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:27.146 [2024-04-19 04:07:19.928736] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:17:27.146 [2024-04-19 04:07:19.928747] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:17:27.146 [2024-04-19 04:07:19.928753] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:17:27.146 [2024-04-19 04:07:19.928760] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:17:27.146 [2024-04-19 04:07:19.928781] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:27.146 [2024-04-19 04:07:19.928789] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:17:27.146 [2024-04-19 04:07:21.934233] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:17:27.146 [2024-04-19 04:07:21.934265] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:17:27.146 [2024-04-19 04:07:21.934287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:27.146 [2024-04-19 04:07:21.934295] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:17:27.146 [2024-04-19 04:07:21.934304] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:17:27.146 [2024-04-19 04:07:21.934310] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:17:27.146 [2024-04-19 04:07:21.934317] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:17:27.146 [2024-04-19 04:07:21.934336] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:27.146 [2024-04-19 04:07:21.934344] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:17:27.146 [2024-04-19 04:07:23.939167] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:17:27.146 [2024-04-19 04:07:23.939196] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:17:27.146 [2024-04-19 04:07:23.939213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:27.146 [2024-04-19 04:07:23.939221] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:17:27.146 [2024-04-19 04:07:23.939230] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:17:27.146 [2024-04-19 04:07:23.939236] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:17:27.146 [2024-04-19 04:07:23.939242] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:17:27.146 [2024-04-19 04:07:23.939261] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
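The repeated "resetting controller" / "Resetting controller failed." cycles above (one attempt roughly every one to two seconds, each failing on RDMA address resolution) follow a simple retry-until-exhausted shape. A minimal sketch of such a loop — hypothetical, not SPDK's actual `nvme_ctrlr` reconnect code — where `connect` is an assumed callable that raises `ConnectionError` on failure:

```python
import time

def reconnect_with_retry(connect, max_attempts=6, delay_s=1.0):
    """Retry-until-exhausted reconnect loop, similar in shape to the
    repeated reset cycles in the log above. `connect` is a hypothetical
    callable that raises ConnectionError on failure (e.g. an RDMA
    address resolution error)."""
    for attempt in range(1, max_attempts + 1):
        try:
            connect()
            return attempt      # reconnected on this attempt
        except ConnectionError:
            # controller remains in failed state; wait, then retry
            time.sleep(delay_s)
    return None                 # gave up; controller stays failed
```

The key property mirrored from the log is that each failed attempt leaves the controller "already in failed state" and simply schedules another attempt after a delay, rather than aborting immediately.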
00:17:27.146 [2024-04-19 04:07:23.939271] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:17:27.146 [2024-04-19 04:07:24.930067] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:17:27.146 [2024-04-19 04:07:24.930097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.146 [2024-04-19 04:07:24.930106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32597 cdw0:16 sqhd:93b9 p:0 m:0 dnr:0 00:17:27.146 [2024-04-19 04:07:24.930114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.146 [2024-04-19 04:07:24.930120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32597 cdw0:16 sqhd:93b9 p:0 m:0 dnr:0 00:17:27.146 [2024-04-19 04:07:24.930126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.146 [2024-04-19 04:07:24.930132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32597 cdw0:16 sqhd:93b9 p:0 m:0 dnr:0 00:17:27.146 [2024-04-19 04:07:24.930139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:27.146 [2024-04-19 04:07:24.930144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32597 cdw0:16 sqhd:93b9 p:0 m:0 dnr:0 00:17:27.146 [2024-04-19 04:07:24.941018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:27.146 [2024-04-19 04:07:24.941042] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] 
in failed state. 00:17:27.146 [2024-04-19 04:07:24.941229] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:17:27.146 [2024-04-19 04:07:24.941259] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.146 [2024-04-19 04:07:24.951269] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.146 [2024-04-19 04:07:24.961292] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.146 [2024-04-19 04:07:24.977703] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.146 [2024-04-19 04:07:24.987560] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:27.146 [2024-04-19 04:07:24.987698] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.146 [2024-04-19 04:07:24.997723] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.147 [2024-04-19 04:07:25.007750] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.147 [2024-04-19 04:07:25.017774] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.147 [2024-04-19 04:07:25.027798] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.147 [2024-04-19 04:07:25.037823] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:27.147 [2024-04-19 04:07:25.047849] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:27.147 [2024-04-19 04:07:25.057875] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:27.147 (identical NOTICE repeated at ~10 ms intervals) 
00:17:27.148 [2024-04-19 04:07:25.940123] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:27.148 [2024-04-19 04:07:25.944127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:108608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:108616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:108624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:108632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:108640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:108648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:27.148 [2024-04-19 04:07:25.944210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:108656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:108664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:108672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:108680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:108688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 
00:17:27.148 [2024-04-19 04:07:25.944280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:108696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:108712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:108720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:108728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:108736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:27.148 [2024-04-19 04:07:25.944352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:108752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:108760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:108768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:108776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 
00:17:27.148 [2024-04-19 04:07:25.944430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:108792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:108800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:108816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:108824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:27.148 [2024-04-19 04:07:25.944505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:108840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:108848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:108856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:108864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 
00:17:27.148 [2024-04-19 04:07:25.944579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:108872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:108880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:108896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.148 [2024-04-19 04:07:25.944623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.148 [2024-04-19 04:07:25.944630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.149 [2024-04-19 04:07:25.944636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.149 [2024-04-19 04:07:25.944645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:108912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:27.149 [2024-04-19 04:07:25.944652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.149 [2024-04-19 04:07:25.944659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:108920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.149 [2024-04-19 04:07:25.944665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.149 [2024-04-19 04:07:25.944672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:108928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.149 [2024-04-19 04:07:25.944678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.149 [2024-04-19 04:07:25.944685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:108936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.149 [2024-04-19 04:07:25.944690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.149 [2024-04-19 04:07:25.944697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:108944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.149 [2024-04-19 04:07:25.944704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.149 [2024-04-19 04:07:25.944711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:108952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.149 [2024-04-19 04:07:25.944717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 
00:17:27.149 [2024-04-19 04:07:25.944724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:108960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.149 [2024-04-19 04:07:25.944730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.149 [2024-04-19 04:07:25.944737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:108968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.149 [2024-04-19 04:07:25.944742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.149 [2024-04-19 04:07:25.944749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.149 [2024-04-19 04:07:25.944755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.149 [2024-04-19 04:07:25.944762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:108984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.149 [2024-04-19 04:07:25.944768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.149 [2024-04-19 04:07:25.944775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:108992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.149 [2024-04-19 04:07:25.944781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.149 [2024-04-19 04:07:25.944788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:109000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:27.149 [2024-04-19 04:07:25.944794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.149 [2024-04-19 04:07:25.944802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:109008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.149 [2024-04-19 04:07:25.944808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.149 [2024-04-19 04:07:25.944815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:109016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.149 [2024-04-19 04:07:25.944821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.149 [2024-04-19 04:07:25.944828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:109024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.149 [2024-04-19 04:07:25.944833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.149 [2024-04-19 04:07:25.944841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.149 [2024-04-19 04:07:25.944846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.149 [2024-04-19 04:07:25.944853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:109040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.149 [2024-04-19 04:07:25.944859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 
00:17:27.149 [2024-04-19 04:07:25.944866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:109048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.149 [2024-04-19 04:07:25.944872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.149 [2024-04-19 04:07:25.944879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:109056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.149 [2024-04-19 04:07:25.944885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.149 [2024-04-19 04:07:25.944892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.149 [2024-04-19 04:07:25.944897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.149 [2024-04-19 04:07:25.944905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:109072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.149 [2024-04-19 04:07:25.944911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.149 [2024-04-19 04:07:25.944918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.149 [2024-04-19 04:07:25.944923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.149 [2024-04-19 04:07:25.944931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:27.149 [2024-04-19 04:07:25.944937 - 04:07:25.945653] nvme_qpair.c: *NOTICE*: [repeated WRITE sqid:1 nsid:1 len:8 command/completion pairs for lba:109096 through lba:109528, every completion identical: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 — identical repeats elided]
00:17:27.151 [2024-04-19 04:07:25.945658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.151 [2024-04-19 04:07:25.945666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:109536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.151 [2024-04-19 04:07:25.945671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.151 [2024-04-19 04:07:25.945678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:109544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.151 [2024-04-19 04:07:25.945684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.151 [2024-04-19 04:07:25.945691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:109552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.151 [2024-04-19 04:07:25.945697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.151 [2024-04-19 04:07:25.945704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:109560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:27.151 [2024-04-19 04:07:25.945709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.151 [2024-04-19 04:07:25.945717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007900000 len:0x1000 key:0x1bf0ef 00:17:27.151 [2024-04-19 04:07:25.945723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 
sqhd:6530 p:0 m:0 dnr:0 00:17:27.151 [2024-04-19 04:07:25.945730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:108552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007902000 len:0x1000 key:0x1bf0ef 00:17:27.151 [2024-04-19 04:07:25.945736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.151 [2024-04-19 04:07:25.945743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:108560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007904000 len:0x1000 key:0x1bf0ef 00:17:27.151 [2024-04-19 04:07:25.945750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.151 [2024-04-19 04:07:25.945758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:108568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007906000 len:0x1000 key:0x1bf0ef 00:17:27.151 [2024-04-19 04:07:25.945764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.151 [2024-04-19 04:07:25.945771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:108576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007908000 len:0x1000 key:0x1bf0ef 00:17:27.151 [2024-04-19 04:07:25.945777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.151 [2024-04-19 04:07:25.945784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:108584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790a000 len:0x1000 key:0x1bf0ef 00:17:27.151 [2024-04-19 04:07:25.945791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.151 
[2024-04-19 04:07:25.945798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:108592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790c000 len:0x1000 key:0x1bf0ef 00:17:27.151 [2024-04-19 04:07:25.945805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32597 cdw0:c25f97c0 sqhd:6530 p:0 m:0 dnr:0 00:17:27.151 [2024-04-19 04:07:25.957514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:27.151 [2024-04-19 04:07:25.957525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:27.151 [2024-04-19 04:07:25.957531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108600 len:8 PRP1 0x0 PRP2 0x0 00:17:27.151 [2024-04-19 04:07:25.957537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:27.151 [2024-04-19 04:07:25.957578] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:17:27.151 [2024-04-19 04:07:25.957805] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:17:27.151 [2024-04-19 04:07:25.957815] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:17:27.151 [2024-04-19 04:07:25.957820] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:17:27.151 [2024-04-19 04:07:25.957832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:27.151 [2024-04-19 04:07:25.957838] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
00:17:27.151 [2024-04-19 04:07:25.957847] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:17:27.151 [2024-04-19 04:07:25.957853] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:17:27.151 [2024-04-19 04:07:25.957860] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:17:27.151 [2024-04-19 04:07:25.957874] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:27.151 [2024-04-19 04:07:25.957880] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:17:27.151 [2024-04-19 04:07:26.961230] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:17:27.151 [2024-04-19 04:07:26.961264] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:17:27.151 [2024-04-19 04:07:26.961270] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:17:27.151 [2024-04-19 04:07:26.961290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:27.151 [2024-04-19 04:07:26.961297] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
00:17:27.151 [2024-04-19 04:07:26.961309] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:17:27.151 [2024-04-19 04:07:26.961315] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:17:27.151 [2024-04-19 04:07:26.961321] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:17:27.151 [2024-04-19 04:07:26.961341] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:27.151 [2024-04-19 04:07:26.961347] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:17:27.151 [2024-04-19 04:07:27.964325] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:17:27.151 [2024-04-19 04:07:27.964360] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:17:27.151 [2024-04-19 04:07:27.964365] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:17:27.151 [2024-04-19 04:07:27.964381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:27.151 [2024-04-19 04:07:27.964388] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
00:17:27.151 [2024-04-19 04:07:27.964415] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:17:27.151 [2024-04-19 04:07:27.964421] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:17:27.151 [2024-04-19 04:07:27.964428] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:17:27.151 [2024-04-19 04:07:27.964446] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:27.151 [2024-04-19 04:07:27.964452] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:17:27.151 [2024-04-19 04:07:29.970478] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:17:27.151 [2024-04-19 04:07:29.970514] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:17:27.151 [2024-04-19 04:07:29.970534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:27.151 [2024-04-19 04:07:29.970542] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:17:27.151 [2024-04-19 04:07:29.970552] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:17:27.151 [2024-04-19 04:07:29.970557] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:17:27.151 [2024-04-19 04:07:29.970564] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:17:27.151 [2024-04-19 04:07:29.970584] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:27.151 [2024-04-19 04:07:29.970590] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:17:27.151 [2024-04-19 04:07:31.976635] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:17:27.151 [2024-04-19 04:07:31.976673] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:17:27.152 [2024-04-19 04:07:31.976693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:27.152 [2024-04-19 04:07:31.976701] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:17:27.152 [2024-04-19 04:07:31.977479] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:17:27.152 [2024-04-19 04:07:31.977491] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:17:27.152 [2024-04-19 04:07:31.977498] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:17:27.152 [2024-04-19 04:07:31.978279] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:27.152 [2024-04-19 04:07:31.978291] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:17:27.152 [2024-04-19 04:07:33.984056] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:17:27.152 [2024-04-19 04:07:33.984086] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:17:27.152 [2024-04-19 04:07:33.984108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:27.152 [2024-04-19 04:07:33.984116] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:17:27.152 [2024-04-19 04:07:33.984898] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:17:27.152 [2024-04-19 04:07:33.984909] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:17:27.152 [2024-04-19 04:07:33.984916] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:17:27.152 [2024-04-19 04:07:33.985605] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:27.152 [2024-04-19 04:07:33.985617] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:17:27.152 [2024-04-19 04:07:35.040170] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:27.152 
00:17:27.152 Latency(us) 
00:17:27.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:27.152 Job: Nvme_mlx_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 
00:17:27.152 Verification LBA range: start 0x0 length 0x8000 
00:17:27.152 Nvme_mlx_0_0n1 : 90.01 11828.66 46.21 0.00 0.00 10803.88 1856.85 11085390.13 
00:17:27.152 Job: Nvme_mlx_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 
00:17:27.152 Verification LBA range: start 0x0 length 0x8000 
00:17:27.152 Nvme_mlx_0_1n1 : 90.01 10442.95 40.79 0.00 0.00 12241.13 1820.44 11085390.13 
00:17:27.152 =================================================================================================================== 
00:17:27.152 Total : 22271.61 87.00 0.00 0.00 11477.80 1820.44 11085390.13 
00:17:27.152 Received shutdown signal, test time was about 90.000000 seconds 
00:17:27.152 
00:17:27.152 Latency(us) 
00:17:27.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:27.152 =================================================================================================================== 
00:17:27.152 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:17:27.152 04:08:40 -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT 
00:17:27.152 04:08:40 -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 
00:17:27.152 04:08:40 -- target/device_removal.sh@202 -- # killprocess 308175 
00:17:27.152 04:08:40 -- common/autotest_common.sh@936 -- # '[' -z 308175 ']' 
00:17:27.152 04:08:40 -- common/autotest_common.sh@940 -- # kill -0 308175 
00:17:27.152 04:08:40 -- common/autotest_common.sh@941 -- # uname 
00:17:27.152 04:08:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
00:17:27.152 04:08:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 308175 
00:17:27.152 04:08:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 
00:17:27.152 04:08:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 
00:17:27.152 04:08:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 308175' 
killing process with pid 308175 
00:17:27.152 04:08:40 -- common/autotest_common.sh@955 -- # kill 308175 
00:17:27.152 04:08:40 -- common/autotest_common.sh@960 -- # wait 308175 
00:17:27.152 04:08:40 -- target/device_removal.sh@203 -- # nvmfpid= 
00:17:27.152 04:08:40 -- target/device_removal.sh@205 -- # return 0 
00:17:27.152 
00:17:27.152 real 1m33.122s 
00:17:27.152 user 4m27.285s 
00:17:27.152 sys 0m5.738s 
00:17:27.152 04:08:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 
00:17:27.152 04:08:40 -- common/autotest_common.sh@10 -- # set +x 
00:17:27.152 ************************************ 
00:17:27.152 END TEST nvmf_device_removal_pci_remove 
00:17:27.152 ************************************ 
00:17:27.152 04:08:40 -- target/device_removal.sh@317 -- # nvmftestfini 
00:17:27.152 04:08:40 -- nvmf/common.sh@477 -- # nvmfcleanup 
00:17:27.152 04:08:40 -- nvmf/common.sh@117 -- # sync 
00:17:27.152 04:08:40 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 
00:17:27.152 04:08:40 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 
00:17:27.152 04:08:40 -- nvmf/common.sh@120 -- # set +e 
00:17:27.152 04:08:40 -- nvmf/common.sh@121 -- # for i in {1..20} 
00:17:27.152 04:08:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 
rmmod nvme_rdma 
rmmod nvme_fabrics 
00:17:27.152 04:08:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:17:27.152 04:08:40 -- nvmf/common.sh@124 -- # set -e 
00:17:27.152 04:08:40 -- nvmf/common.sh@125 -- # return 0 
00:17:27.152 04:08:40 -- nvmf/common.sh@478 -- # '[' -n '' ']' 
00:17:27.152 04:08:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 
00:17:27.152 04:08:40 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 
00:17:27.152 04:08:40 -- target/device_removal.sh@318 -- # clean_bond_device 
00:17:27.152 04:08:40 -- target/device_removal.sh@240 -- # ip link 
00:17:27.152 04:08:40 -- target/device_removal.sh@240 -- # grep bond_nvmf
00:17:27.152
00:17:27.152 real 3m12.336s
00:17:27.152 user 8m55.032s
00:17:27.152 sys 0m15.697s
00:17:27.152 04:08:40 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:17:27.152 04:08:40 -- common/autotest_common.sh@10 -- # set +x
00:17:27.152 ************************************
00:17:27.152 END TEST nvmf_device_removal
00:17:27.152 ************************************
00:17:27.152 04:08:40 -- nvmf/nvmf.sh@79 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma
00:17:27.152 04:08:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:17:27.152 04:08:40 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:17:27.152 04:08:40 -- common/autotest_common.sh@10 -- # set +x
00:17:27.152 ************************************
00:17:27.152 START TEST nvmf_srq_overwhelm
00:17:27.152 ************************************
00:17:27.152 04:08:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma
00:17:27.152 * Looking for test storage...
00:17:27.152 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:27.152 04:08:41 -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:27.152 04:08:41 -- nvmf/common.sh@7 -- # uname -s 00:17:27.152 04:08:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.152 04:08:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.152 04:08:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.152 04:08:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.152 04:08:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.152 04:08:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.152 04:08:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.152 04:08:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.152 04:08:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.152 04:08:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.152 04:08:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:17:27.152 04:08:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:17:27.152 04:08:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.152 04:08:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.152 04:08:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:27.152 04:08:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.152 04:08:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:27.152 04:08:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.152 04:08:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.152 04:08:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.152 04:08:41 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.152 04:08:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.152 04:08:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.152 04:08:41 -- paths/export.sh@5 -- # export PATH 00:17:27.152 04:08:41 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.152 04:08:41 -- nvmf/common.sh@47 -- # : 0 00:17:27.152 04:08:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:27.152 04:08:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:27.152 04:08:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.153 04:08:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.153 04:08:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.153 04:08:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:27.153 04:08:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:27.153 04:08:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:27.153 04:08:41 -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:27.153 04:08:41 -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:27.153 04:08:41 -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:17:27.153 04:08:41 -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:17:27.153 04:08:41 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:17:27.153 04:08:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.153 04:08:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:27.153 04:08:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:27.153 04:08:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:27.153 04:08:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.153 04:08:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:17:27.153 04:08:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.153 04:08:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:27.153 04:08:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:27.153 04:08:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:27.153 04:08:41 -- common/autotest_common.sh@10 -- # set +x 00:17:32.423 04:08:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:32.423 04:08:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:32.423 04:08:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:32.423 04:08:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:32.423 04:08:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:32.423 04:08:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:32.423 04:08:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:32.423 04:08:46 -- nvmf/common.sh@295 -- # net_devs=() 00:17:32.423 04:08:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:32.423 04:08:46 -- nvmf/common.sh@296 -- # e810=() 00:17:32.423 04:08:46 -- nvmf/common.sh@296 -- # local -ga e810 00:17:32.423 04:08:46 -- nvmf/common.sh@297 -- # x722=() 00:17:32.423 04:08:46 -- nvmf/common.sh@297 -- # local -ga x722 00:17:32.423 04:08:46 -- nvmf/common.sh@298 -- # mlx=() 00:17:32.423 04:08:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:32.423 04:08:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:32.423 04:08:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:32.423 04:08:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:32.423 04:08:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:32.423 04:08:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:32.423 04:08:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:32.423 04:08:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:32.423 04:08:46 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:32.423 04:08:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:32.423 04:08:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:32.423 04:08:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:32.423 04:08:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:32.423 04:08:46 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:32.423 04:08:46 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:32.423 04:08:46 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:32.423 04:08:46 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:32.423 04:08:46 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:32.423 04:08:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:32.423 04:08:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:32.423 04:08:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:17:32.423 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:17:32.423 04:08:46 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:32.423 04:08:46 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:32.423 04:08:46 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:32.423 04:08:46 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:32.423 04:08:46 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:32.423 04:08:46 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:32.423 04:08:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:32.423 04:08:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:17:32.423 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:17:32.423 04:08:46 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:32.423 04:08:46 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:32.423 04:08:46 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:32.423 04:08:46 -- 
nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:32.423 04:08:46 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:32.423 04:08:46 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:32.423 04:08:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:32.423 04:08:46 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:32.423 04:08:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:32.423 04:08:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.423 04:08:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:32.423 04:08:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.423 04:08:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:32.423 Found net devices under 0000:18:00.0: mlx_0_0 00:17:32.423 04:08:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.423 04:08:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:32.423 04:08:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.423 04:08:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:32.423 04:08:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.423 04:08:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:32.423 Found net devices under 0000:18:00.1: mlx_0_1 00:17:32.423 04:08:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.423 04:08:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:32.423 04:08:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:32.423 04:08:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:32.423 04:08:46 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:17:32.423 04:08:46 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:17:32.423 04:08:46 -- nvmf/common.sh@409 -- # rdma_device_init 00:17:32.423 04:08:46 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:17:32.423 04:08:46 -- nvmf/common.sh@58 -- # uname 00:17:32.423 
04:08:46 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:32.423 04:08:46 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:32.423 04:08:46 -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:32.423 04:08:46 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:32.423 04:08:46 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:32.423 04:08:46 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:32.423 04:08:46 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:32.424 04:08:46 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:32.424 04:08:46 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:17:32.424 04:08:46 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:32.424 04:08:46 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:32.424 04:08:46 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:32.424 04:08:46 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:32.424 04:08:46 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:32.424 04:08:46 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:32.424 04:08:46 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:32.424 04:08:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:32.424 04:08:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:32.424 04:08:46 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:32.424 04:08:46 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:32.424 04:08:46 -- nvmf/common.sh@105 -- # continue 2 00:17:32.424 04:08:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:32.424 04:08:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:32.424 04:08:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:32.424 04:08:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:32.424 04:08:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:32.424 04:08:46 -- nvmf/common.sh@104 -- # echo mlx_0_1 
00:17:32.424 04:08:46 -- nvmf/common.sh@105 -- # continue 2
00:17:32.424 04:08:46 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:17:32.424 04:08:46 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0
00:17:32.424 04:08:46 -- nvmf/common.sh@112 -- # interface=mlx_0_0
00:17:32.424 04:08:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0
00:17:32.424 04:08:46 -- nvmf/common.sh@113 -- # awk '{print $4}'
00:17:32.424 04:08:46 -- nvmf/common.sh@113 -- # cut -d/ -f1
00:17:32.424 04:08:46 -- nvmf/common.sh@74 -- # ip=192.168.100.8
00:17:32.424 04:08:46 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]]
00:17:32.424 04:08:46 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0
00:17:32.424 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:17:32.424 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff
00:17:32.424 altname enp24s0f0np0
00:17:32.424 altname ens785f0np0
00:17:32.424 inet 192.168.100.8/24 scope global mlx_0_0
00:17:32.424 valid_lft forever preferred_lft forever
00:17:32.424 04:08:46 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:17:32.424 04:08:46 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1
00:17:32.424 04:08:46 -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:17:32.424 04:08:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:17:32.424 04:08:46 -- nvmf/common.sh@113 -- # awk '{print $4}'
00:17:32.424 04:08:46 -- nvmf/common.sh@113 -- # cut -d/ -f1
00:17:32.424 04:08:46 -- nvmf/common.sh@74 -- # ip=192.168.100.9
00:17:32.424 04:08:46 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]]
00:17:32.424 04:08:46 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1
00:17:32.424 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:17:32.424 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff
00:17:32.424 altname enp24s0f1np1
00:17:32.424 altname ens785f1np1
00:17:32.424 inet 192.168.100.9/24 scope global mlx_0_1
00:17:32.424 valid_lft forever preferred_lft forever
00:17:32.424 04:08:46 -- nvmf/common.sh@411
-- # return 0 00:17:32.424 04:08:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:32.424 04:08:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:32.424 04:08:46 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:17:32.424 04:08:46 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:17:32.424 04:08:46 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:32.424 04:08:46 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:32.424 04:08:46 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:32.424 04:08:46 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:32.424 04:08:46 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:32.424 04:08:46 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:32.424 04:08:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:32.424 04:08:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:32.424 04:08:46 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:32.424 04:08:46 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:32.424 04:08:46 -- nvmf/common.sh@105 -- # continue 2 00:17:32.424 04:08:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:32.424 04:08:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:32.424 04:08:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:32.424 04:08:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:32.424 04:08:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:32.424 04:08:46 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:32.424 04:08:46 -- nvmf/common.sh@105 -- # continue 2 00:17:32.424 04:08:46 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:32.424 04:08:46 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:32.424 04:08:46 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:32.424 04:08:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 
00:17:32.424 04:08:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:32.424 04:08:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:32.424 04:08:46 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:32.424 04:08:46 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:32.424 04:08:46 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:32.424 04:08:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:32.424 04:08:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:32.424 04:08:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:32.424 04:08:46 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:17:32.424 192.168.100.9' 00:17:32.424 04:08:46 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:32.424 192.168.100.9' 00:17:32.424 04:08:46 -- nvmf/common.sh@446 -- # head -n 1 00:17:32.424 04:08:46 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:32.424 04:08:46 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:17:32.424 192.168.100.9' 00:17:32.424 04:08:46 -- nvmf/common.sh@447 -- # tail -n +2 00:17:32.424 04:08:46 -- nvmf/common.sh@447 -- # head -n 1 00:17:32.424 04:08:46 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:32.424 04:08:46 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:17:32.424 04:08:46 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:32.424 04:08:46 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:17:32.424 04:08:46 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:17:32.424 04:08:46 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:17:32.424 04:08:46 -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:17:32.424 04:08:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:32.424 04:08:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:32.424 04:08:46 -- common/autotest_common.sh@10 -- # set +x 00:17:32.424 04:08:46 -- nvmf/common.sh@470 -- # nvmfpid=328211 00:17:32.424 04:08:46 -- nvmf/common.sh@471 -- # 
waitforlisten 328211
00:17:32.424 04:08:46 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:17:32.424 04:08:46 -- common/autotest_common.sh@817 -- # '[' -z 328211 ']'
00:17:32.424 04:08:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:32.424 04:08:46 -- common/autotest_common.sh@822 -- # local max_retries=100
00:17:32.424 04:08:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:32.424 04:08:46 -- common/autotest_common.sh@826 -- # xtrace_disable
00:17:32.424 04:08:46 -- common/autotest_common.sh@10 -- # set +x
00:17:32.424 [2024-04-19 04:08:46.421302] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization...
00:17:32.424 [2024-04-19 04:08:46.421347] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:32.424 EAL: No free 2048 kB hugepages reported on node 1
00:17:32.424 [2024-04-19 04:08:46.471172] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:17:32.424 [2024-04-19 04:08:46.538669] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:32.424 [2024-04-19 04:08:46.538708] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:32.424 [2024-04-19 04:08:46.538714] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:32.424 [2024-04-19 04:08:46.538720] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:32.424 [2024-04-19 04:08:46.538724] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:32.424 [2024-04-19 04:08:46.538764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:32.424 [2024-04-19 04:08:46.538861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:17:32.424 [2024-04-19 04:08:46.538957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:17:32.424 [2024-04-19 04:08:46.538959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:32.683 04:08:47 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:17:32.683 04:08:47 -- common/autotest_common.sh@850 -- # return 0
00:17:32.683 04:08:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:17:32.683 04:08:47 -- common/autotest_common.sh@716 -- # xtrace_disable
00:17:32.683 04:08:47 -- common/autotest_common.sh@10 -- # set +x
00:17:32.942 04:08:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:32.942 04:08:47 -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024
00:17:32.942 04:08:47 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:32.942 04:08:47 -- common/autotest_common.sh@10 -- # set +x
00:17:32.942 [2024-04-19 04:08:47.258564] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe226c0/0xe26bb0) succeed.
00:17:32.942 [2024-04-19 04:08:47.269254] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe23cb0/0xe68240) succeed.
00:17:32.942 04:08:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:32.942 04:08:47 -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:17:32.942 04:08:47 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:17:32.942 04:08:47 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:17:32.942 04:08:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:32.942 04:08:47 -- common/autotest_common.sh@10 -- # set +x 00:17:32.942 04:08:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:32.942 04:08:47 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:32.942 04:08:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:32.942 04:08:47 -- common/autotest_common.sh@10 -- # set +x 00:17:32.942 Malloc0 00:17:32.942 04:08:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:32.942 04:08:47 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:17:32.942 04:08:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:32.942 04:08:47 -- common/autotest_common.sh@10 -- # set +x 00:17:32.942 04:08:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:32.942 04:08:47 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:32.942 04:08:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:32.942 04:08:47 -- common/autotest_common.sh@10 -- # set +x 00:17:32.942 [2024-04-19 04:08:47.359462] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:32.942 04:08:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:32.942 04:08:47 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 
192.168.100.8 -s 4420 00:17:33.878 04:08:48 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:17:33.878 04:08:48 -- common/autotest_common.sh@1221 -- # local i=0 00:17:33.878 04:08:48 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:17:33.878 04:08:48 -- common/autotest_common.sh@1222 -- # grep -q -w nvme0n1 00:17:33.878 04:08:48 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:17:33.878 04:08:48 -- common/autotest_common.sh@1228 -- # grep -q -w nvme0n1 00:17:33.878 04:08:48 -- common/autotest_common.sh@1232 -- # return 0 00:17:33.878 04:08:48 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:17:33.878 04:08:48 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:33.878 04:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:33.878 04:08:48 -- common/autotest_common.sh@10 -- # set +x 00:17:33.878 04:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:33.878 04:08:48 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:33.878 04:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:33.878 04:08:48 -- common/autotest_common.sh@10 -- # set +x 00:17:33.878 Malloc1 00:17:33.878 04:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:33.878 04:08:48 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:33.878 04:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:33.878 04:08:48 -- common/autotest_common.sh@10 -- # set +x 00:17:33.878 04:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:33.878 04:08:48 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:33.878 04:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:33.878 04:08:48 -- common/autotest_common.sh@10 -- # set +x 00:17:34.136 04:08:48 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:34.136 04:08:48 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:35.072 04:08:49 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:17:35.072 04:08:49 -- common/autotest_common.sh@1221 -- # local i=0 00:17:35.072 04:08:49 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:17:35.072 04:08:49 -- common/autotest_common.sh@1222 -- # grep -q -w nvme1n1 00:17:35.072 04:08:49 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:17:35.072 04:08:49 -- common/autotest_common.sh@1228 -- # grep -q -w nvme1n1 00:17:35.072 04:08:49 -- common/autotest_common.sh@1232 -- # return 0 00:17:35.072 04:08:49 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:17:35.072 04:08:49 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:17:35.072 04:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:35.072 04:08:49 -- common/autotest_common.sh@10 -- # set +x 00:17:35.072 04:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:35.072 04:08:49 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:17:35.072 04:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:35.072 04:08:49 -- common/autotest_common.sh@10 -- # set +x 00:17:35.072 Malloc2 00:17:35.072 04:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:35.072 04:08:49 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:17:35.072 04:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:35.072 04:08:49 -- common/autotest_common.sh@10 -- # set +x 00:17:35.072 04:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:35.072 04:08:49 -- target/srq_overwhelm.sh@26 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:17:35.072 04:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:35.072 04:08:49 -- common/autotest_common.sh@10 -- # set +x 00:17:35.072 04:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:35.072 04:08:49 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:17:36.008 04:08:50 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:17:36.008 04:08:50 -- common/autotest_common.sh@1221 -- # local i=0 00:17:36.009 04:08:50 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:17:36.009 04:08:50 -- common/autotest_common.sh@1222 -- # grep -q -w nvme2n1 00:17:36.009 04:08:50 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:17:36.009 04:08:50 -- common/autotest_common.sh@1228 -- # grep -q -w nvme2n1 00:17:36.009 04:08:50 -- common/autotest_common.sh@1232 -- # return 0 00:17:36.009 04:08:50 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:17:36.009 04:08:50 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:17:36.009 04:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.009 04:08:50 -- common/autotest_common.sh@10 -- # set +x 00:17:36.009 04:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.009 04:08:50 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:17:36.009 04:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.009 04:08:50 -- common/autotest_common.sh@10 -- # set +x 00:17:36.009 Malloc3 00:17:36.009 04:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.009 04:08:50 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 
Malloc3 00:17:36.009 04:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.009 04:08:50 -- common/autotest_common.sh@10 -- # set +x 00:17:36.009 04:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.009 04:08:50 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:17:36.009 04:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.009 04:08:50 -- common/autotest_common.sh@10 -- # set +x 00:17:36.009 04:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.009 04:08:50 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:17:36.945 04:08:51 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:17:36.945 04:08:51 -- common/autotest_common.sh@1221 -- # local i=0 00:17:36.945 04:08:51 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:17:36.945 04:08:51 -- common/autotest_common.sh@1222 -- # grep -q -w nvme3n1 00:17:36.945 04:08:51 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:17:36.945 04:08:51 -- common/autotest_common.sh@1228 -- # grep -q -w nvme3n1 00:17:36.945 04:08:51 -- common/autotest_common.sh@1232 -- # return 0 00:17:36.945 04:08:51 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:17:36.945 04:08:51 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:17:36.945 04:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.945 04:08:51 -- common/autotest_common.sh@10 -- # set +x 00:17:36.945 04:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.945 04:08:51 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:17:36.945 04:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:17:36.945 04:08:51 -- common/autotest_common.sh@10 -- # set +x 00:17:37.203 Malloc4 00:17:37.204 04:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:37.204 04:08:51 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:17:37.204 04:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:37.204 04:08:51 -- common/autotest_common.sh@10 -- # set +x 00:17:37.204 04:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:37.204 04:08:51 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:17:37.204 04:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:37.204 04:08:51 -- common/autotest_common.sh@10 -- # set +x 00:17:37.204 04:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:37.204 04:08:51 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:17:38.138 04:08:52 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:17:38.138 04:08:52 -- common/autotest_common.sh@1221 -- # local i=0 00:17:38.138 04:08:52 -- common/autotest_common.sh@1222 -- # grep -q -w nvme4n1 00:17:38.138 04:08:52 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:17:38.138 04:08:52 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:17:38.138 04:08:52 -- common/autotest_common.sh@1228 -- # grep -q -w nvme4n1 00:17:38.138 04:08:52 -- common/autotest_common.sh@1232 -- # return 0 00:17:38.138 04:08:52 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:17:38.138 04:08:52 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:17:38.138 04:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.138 04:08:52 -- 
common/autotest_common.sh@10 -- # set +x 00:17:38.138 04:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.138 04:08:52 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:17:38.138 04:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.138 04:08:52 -- common/autotest_common.sh@10 -- # set +x 00:17:38.138 Malloc5 00:17:38.138 04:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.138 04:08:52 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:17:38.138 04:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.138 04:08:52 -- common/autotest_common.sh@10 -- # set +x 00:17:38.138 04:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.138 04:08:52 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:17:38.138 04:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.138 04:08:52 -- common/autotest_common.sh@10 -- # set +x 00:17:38.138 04:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.138 04:08:52 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:17:39.072 04:08:53 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:17:39.072 04:08:53 -- common/autotest_common.sh@1221 -- # local i=0 00:17:39.072 04:08:53 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:17:39.072 04:08:53 -- common/autotest_common.sh@1222 -- # grep -q -w nvme5n1 00:17:39.072 04:08:53 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:17:39.072 04:08:53 -- common/autotest_common.sh@1228 -- # grep -q -w nvme5n1 00:17:39.072 04:08:53 -- common/autotest_common.sh@1232 -- # return 0 00:17:39.072 04:08:53 -- 
target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13
00:17:39.072 [global]
00:17:39.072 thread=1
00:17:39.072 invalidate=1
00:17:39.072 rw=read
00:17:39.072 time_based=1
00:17:39.072 runtime=10
00:17:39.072 ioengine=libaio
00:17:39.072 direct=1
00:17:39.072 bs=1048576
00:17:39.072 iodepth=128
00:17:39.072 norandommap=1
00:17:39.072 numjobs=13
00:17:39.072 
00:17:39.072 [job0]
00:17:39.072 filename=/dev/nvme0n1
00:17:39.072 [job1]
00:17:39.072 filename=/dev/nvme1n1
00:17:39.072 [job2]
00:17:39.072 filename=/dev/nvme2n1
00:17:39.072 [job3]
00:17:39.072 filename=/dev/nvme3n1
00:17:39.072 [job4]
00:17:39.072 filename=/dev/nvme4n1
00:17:39.072 [job5]
00:17:39.072 filename=/dev/nvme5n1
00:17:39.349 Could not set queue depth (nvme0n1)
00:17:39.349 Could not set queue depth (nvme1n1)
00:17:39.349 Could not set queue depth (nvme2n1)
00:17:39.349 Could not set queue depth (nvme3n1)
00:17:39.349 Could not set queue depth (nvme4n1)
00:17:39.349 Could not set queue depth (nvme5n1)
00:17:39.611 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:17:39.611 ...
00:17:39.611 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:17:39.611 ...
00:17:39.611 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:17:39.611 ...
00:17:39.611 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:17:39.611 ...
00:17:39.611 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:17:39.611 ...
00:17:39.611 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:17:39.611 ...
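Each `nvme connect` in the loop above is followed by a `waitforblk` poll (the `autotest_common.sh@1221`-`@1232` xtrace lines): `lsblk -l -o NAME` is grepped for the new namespace until it shows up. That pattern reduces to a generic poll-until helper; the sketch below is an illustration of the pattern only, not SPDK's actual `waitforblk` implementation, and the retry count and sleep interval are assumptions.

```shell
# Generic poll-until helper modeled on the waitforblk xtrace above.
# Runs the given predicate command up to $1 times, sleeping between
# attempts; returns 0 as soon as it succeeds, 1 if it never does.
waitfor() {
    local tries=$1; shift
    local i=0
    while (( i++ < tries )); do
        "$@" && return 0   # predicate succeeded: condition is ready
        sleep 0.2
    done
    return 1               # condition never became true
}

# Hypothetical usage mirroring the log (device name is an example):
# waitfor 30 sh -c 'lsblk -l -o NAME | grep -q -w nvme1n1'
```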
00:17:39.611 fio-3.35 00:17:39.611 Starting 78 threads 00:17:54.508 00:17:54.508 job0: (groupid=0, jobs=1): err= 0: pid=329786: Fri Apr 19 04:09:08 2024 00:17:54.508 read: IOPS=136, BW=137MiB/s (143MB/s)(1636MiB/11978msec) 00:17:54.508 slat (usec): min=40, max=2125.1k, avg=7284.71, stdev=73299.05 00:17:54.508 clat (msec): min=53, max=4596, avg=896.03, stdev=1020.12 00:17:54.508 lat (msec): min=293, max=4618, avg=903.32, stdev=1022.89 00:17:54.508 clat percentiles (msec): 00:17:54.508 | 1.00th=[ 309], 5.00th=[ 372], 10.00th=[ 405], 20.00th=[ 477], 00:17:54.508 | 30.00th=[ 514], 40.00th=[ 575], 50.00th=[ 617], 60.00th=[ 651], 00:17:54.508 | 70.00th=[ 709], 80.00th=[ 776], 90.00th=[ 911], 95.00th=[ 4463], 00:17:54.508 | 99.00th=[ 4597], 99.50th=[ 4597], 99.90th=[ 4597], 99.95th=[ 4597], 00:17:54.508 | 99.99th=[ 4597] 00:17:54.508 bw ( KiB/s): min=22528, max=344064, per=6.62%, avg=193010.69, stdev=87546.19, samples=16 00:17:54.508 iops : min= 22, max= 336, avg=188.44, stdev=85.55, samples=16 00:17:54.508 lat (msec) : 100=0.06%, 500=25.61%, 750=49.08%, 1000=16.81%, >=2000=8.44% 00:17:54.508 cpu : usr=0.04%, sys=1.39%, ctx=1859, majf=0, minf=32769 00:17:54.508 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:17:54.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.508 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:54.508 issued rwts: total=1636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.508 job0: (groupid=0, jobs=1): err= 0: pid=329787: Fri Apr 19 04:09:08 2024 00:17:54.508 read: IOPS=15, BW=15.0MiB/s (15.8MB/s)(212MiB/14111msec) 00:17:54.508 slat (usec): min=543, max=2123.2k, avg=56613.90, stdev=297366.84 00:17:54.508 clat (msec): min=1118, max=14006, avg=7816.50, stdev=4835.37 00:17:54.508 lat (msec): min=1123, max=14009, avg=7873.11, stdev=4832.83 00:17:54.508 clat percentiles (msec): 00:17:54.508 | 
1.00th=[ 1116], 5.00th=[ 1133], 10.00th=[ 1167], 20.00th=[ 2056], 00:17:54.508 | 30.00th=[ 2299], 40.00th=[ 6074], 50.00th=[11342], 60.00th=[11610], 00:17:54.508 | 70.00th=[11879], 80.00th=[12147], 90.00th=[12281], 95.00th=[12818], 00:17:54.508 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:17:54.508 | 99.99th=[14026] 00:17:54.508 bw ( KiB/s): min= 2048, max=88064, per=0.75%, avg=21760.50, stdev=31839.06, samples=8 00:17:54.508 iops : min= 2, max= 86, avg=21.25, stdev=31.09, samples=8 00:17:54.508 lat (msec) : 2000=16.98%, >=2000=83.02% 00:17:54.508 cpu : usr=0.00%, sys=0.61%, ctx=421, majf=0, minf=32769 00:17:54.508 IO depths : 1=0.5%, 2=0.9%, 4=1.9%, 8=3.8%, 16=7.5%, 32=15.1%, >=64=70.3% 00:17:54.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.508 complete : 0=0.0%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.2% 00:17:54.508 issued rwts: total=212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.508 job0: (groupid=0, jobs=1): err= 0: pid=329788: Fri Apr 19 04:09:08 2024 00:17:54.508 read: IOPS=69, BW=69.5MiB/s (72.9MB/s)(701MiB/10087msec) 00:17:54.508 slat (usec): min=63, max=2108.7k, avg=14272.15, stdev=111817.87 00:17:54.508 clat (msec): min=78, max=5887, avg=1746.56, stdev=1756.41 00:17:54.508 lat (msec): min=91, max=5917, avg=1760.83, stdev=1763.39 00:17:54.508 clat percentiles (msec): 00:17:54.508 | 1.00th=[ 115], 5.00th=[ 456], 10.00th=[ 684], 20.00th=[ 768], 00:17:54.508 | 30.00th=[ 827], 40.00th=[ 902], 50.00th=[ 936], 60.00th=[ 1011], 00:17:54.508 | 70.00th=[ 1284], 80.00th=[ 1586], 90.00th=[ 5336], 95.00th=[ 5671], 00:17:54.508 | 99.00th=[ 5873], 99.50th=[ 5873], 99.90th=[ 5873], 99.95th=[ 5873], 00:17:54.508 | 99.99th=[ 5873] 00:17:54.508 bw ( KiB/s): min= 8192, max=161792, per=3.36%, avg=97891.50, stdev=48710.49, samples=12 00:17:54.508 iops : min= 8, max= 158, avg=95.50, stdev=47.66, samples=12 00:17:54.508 lat 
(msec) : 100=0.43%, 250=2.14%, 500=3.00%, 750=7.85%, 1000=45.93% 00:17:54.508 lat (msec) : 2000=21.83%, >=2000=18.83% 00:17:54.508 cpu : usr=0.02%, sys=1.13%, ctx=1053, majf=0, minf=32769 00:17:54.508 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.6%, >=64=91.0% 00:17:54.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.508 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:54.508 issued rwts: total=701,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.508 job0: (groupid=0, jobs=1): err= 0: pid=329789: Fri Apr 19 04:09:08 2024 00:17:54.508 read: IOPS=8, BW=8476KiB/s (8680kB/s)(99.0MiB/11960msec) 00:17:54.508 slat (usec): min=614, max=2075.2k, avg=120257.12, stdev=391748.14 00:17:54.508 clat (msec): min=53, max=11956, avg=6678.38, stdev=3175.72 00:17:54.508 lat (msec): min=2054, max=11959, avg=6798.64, stdev=3147.61 00:17:54.508 clat percentiles (msec): 00:17:54.508 | 1.00th=[ 54], 5.00th=[ 2106], 10.00th=[ 3104], 20.00th=[ 3373], 00:17:54.508 | 30.00th=[ 4010], 40.00th=[ 4279], 50.00th=[ 7886], 60.00th=[ 8087], 00:17:54.508 | 70.00th=[ 8356], 80.00th=[ 8658], 90.00th=[11879], 95.00th=[12013], 00:17:54.508 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:17:54.508 | 99.99th=[12013] 00:17:54.508 lat (msec) : 100=1.01%, >=2000=98.99% 00:17:54.508 cpu : usr=0.00%, sys=0.50%, ctx=382, majf=0, minf=25345 00:17:54.508 IO depths : 1=1.0%, 2=2.0%, 4=4.0%, 8=8.1%, 16=16.2%, 32=32.3%, >=64=36.4% 00:17:54.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.508 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:54.508 issued rwts: total=99,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.508 job0: (groupid=0, jobs=1): err= 0: pid=329790: Fri Apr 19 04:09:08 2024 00:17:54.508 read: 
IOPS=25, BW=25.4MiB/s (26.7MB/s)(256MiB/10059msec) 00:17:54.508 slat (usec): min=419, max=2058.2k, avg=39060.20, stdev=209489.79 00:17:54.508 clat (msec): min=58, max=8411, avg=2118.63, stdev=2013.98 00:17:54.508 lat (msec): min=60, max=8442, avg=2157.69, stdev=2051.74 00:17:54.508 clat percentiles (msec): 00:17:54.508 | 1.00th=[ 62], 5.00th=[ 161], 10.00th=[ 435], 20.00th=[ 718], 00:17:54.508 | 30.00th=[ 1003], 40.00th=[ 1284], 50.00th=[ 1485], 60.00th=[ 1502], 00:17:54.508 | 70.00th=[ 1552], 80.00th=[ 5000], 90.00th=[ 5269], 95.00th=[ 5336], 00:17:54.508 | 99.00th=[ 8356], 99.50th=[ 8356], 99.90th=[ 8423], 99.95th=[ 8423], 00:17:54.508 | 99.99th=[ 8423] 00:17:54.508 bw ( KiB/s): min=14336, max=88064, per=2.27%, avg=66048.00, stdev=35130.81, samples=4 00:17:54.508 iops : min= 14, max= 86, avg=64.50, stdev=34.31, samples=4 00:17:54.508 lat (msec) : 100=2.34%, 250=3.52%, 500=5.86%, 750=9.38%, 1000=8.98% 00:17:54.508 lat (msec) : 2000=46.88%, >=2000=23.05% 00:17:54.508 cpu : usr=0.00%, sys=0.65%, ctx=546, majf=0, minf=32769 00:17:54.508 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.1%, 16=6.2%, 32=12.5%, >=64=75.4% 00:17:54.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.508 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:17:54.508 issued rwts: total=256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.508 job0: (groupid=0, jobs=1): err= 0: pid=329791: Fri Apr 19 04:09:08 2024 00:17:54.508 read: IOPS=3, BW=3134KiB/s (3209kB/s)(43.0MiB/14052msec) 00:17:54.508 slat (usec): min=752, max=2132.0k, avg=276960.74, stdev=686379.13 00:17:54.508 clat (msec): min=2141, max=14049, avg=9225.66, stdev=4307.49 00:17:54.508 lat (msec): min=4203, max=14051, avg=9502.62, stdev=4223.21 00:17:54.508 clat percentiles (msec): 00:17:54.508 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4212], 20.00th=[ 4245], 00:17:54.508 | 30.00th=[ 4279], 40.00th=[ 6409], 50.00th=[ 
8557], 60.00th=[12818], 00:17:54.508 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026], 00:17:54.508 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:17:54.508 | 99.99th=[14026] 00:17:54.508 lat (msec) : >=2000=100.00% 00:17:54.508 cpu : usr=0.00%, sys=0.21%, ctx=82, majf=0, minf=11009 00:17:54.508 IO depths : 1=2.3%, 2=4.7%, 4=9.3%, 8=18.6%, 16=37.2%, 32=27.9%, >=64=0.0% 00:17:54.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.508 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:54.508 issued rwts: total=43,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.508 job0: (groupid=0, jobs=1): err= 0: pid=329792: Fri Apr 19 04:09:08 2024 00:17:54.508 read: IOPS=6, BW=6691KiB/s (6852kB/s)(92.0MiB/14079msec) 00:17:54.508 slat (usec): min=484, max=2165.6k, avg=130115.07, stdev=481597.63 00:17:54.508 clat (msec): min=2107, max=14068, avg=12684.23, stdev=2385.85 00:17:54.508 lat (msec): min=4199, max=14078, avg=12814.35, stdev=2113.56 00:17:54.508 clat percentiles (msec): 00:17:54.508 | 1.00th=[ 2106], 5.00th=[ 6409], 10.00th=[ 8557], 20.00th=[13355], 00:17:54.508 | 30.00th=[13489], 40.00th=[13489], 50.00th=[13624], 60.00th=[13624], 00:17:54.508 | 70.00th=[13758], 80.00th=[13758], 90.00th=[13892], 95.00th=[14026], 00:17:54.508 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:17:54.508 | 99.99th=[14026] 00:17:54.508 lat (msec) : >=2000=100.00% 00:17:54.508 cpu : usr=0.00%, sys=0.47%, ctx=125, majf=0, minf=23553 00:17:54.508 IO depths : 1=1.1%, 2=2.2%, 4=4.3%, 8=8.7%, 16=17.4%, 32=34.8%, >=64=31.5% 00:17:54.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.508 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:54.508 issued rwts: total=92,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.508 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:17:54.508 job0: (groupid=0, jobs=1): err= 0: pid=329793: Fri Apr 19 04:09:08 2024 00:17:54.508 read: IOPS=71, BW=71.1MiB/s (74.6MB/s)(996MiB/14004msec) 00:17:54.508 slat (usec): min=65, max=2104.1k, avg=11940.68, stdev=119007.12 00:17:54.508 clat (msec): min=229, max=8561, avg=1345.91, stdev=2221.23 00:17:54.508 lat (msec): min=231, max=8584, avg=1357.85, stdev=2229.89 00:17:54.508 clat percentiles (msec): 00:17:54.508 | 1.00th=[ 230], 5.00th=[ 230], 10.00th=[ 232], 20.00th=[ 234], 00:17:54.508 | 30.00th=[ 234], 40.00th=[ 236], 50.00th=[ 239], 60.00th=[ 634], 00:17:54.508 | 70.00th=[ 885], 80.00th=[ 1011], 90.00th=[ 6946], 95.00th=[ 7080], 00:17:54.508 | 99.00th=[ 7148], 99.50th=[ 7148], 99.90th=[ 8557], 99.95th=[ 8557], 00:17:54.508 | 99.99th=[ 8557] 00:17:54.508 bw ( KiB/s): min= 2052, max=551856, per=6.09%, avg=177712.20, stdev=214973.78, samples=10 00:17:54.508 iops : min= 2, max= 538, avg=173.40, stdev=209.81, samples=10 00:17:54.508 lat (msec) : 250=55.02%, 500=3.31%, 750=3.31%, 1000=17.57%, 2000=5.72% 00:17:54.508 lat (msec) : >=2000=15.06% 00:17:54.508 cpu : usr=0.05%, sys=0.94%, ctx=1800, majf=0, minf=32769 00:17:54.508 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:17:54.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.508 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:54.508 issued rwts: total=996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.508 job0: (groupid=0, jobs=1): err= 0: pid=329794: Fri Apr 19 04:09:08 2024 00:17:54.508 read: IOPS=28, BW=28.6MiB/s (30.0MB/s)(404MiB/14128msec) 00:17:54.509 slat (usec): min=42, max=2038.5k, avg=29610.98, stdev=199653.87 00:17:54.509 clat (msec): min=832, max=7230, avg=3486.23, stdev=2568.75 00:17:54.509 lat (msec): min=834, max=7230, avg=3515.84, stdev=2571.77 00:17:54.509 clat percentiles (msec): 00:17:54.509 | 
1.00th=[ 835], 5.00th=[ 835], 10.00th=[ 844], 20.00th=[ 877], 00:17:54.509 | 30.00th=[ 911], 40.00th=[ 1020], 50.00th=[ 2735], 60.00th=[ 4279], 00:17:54.509 | 70.00th=[ 5940], 80.00th=[ 6678], 90.00th=[ 6946], 95.00th=[ 7148], 00:17:54.509 | 99.00th=[ 7215], 99.50th=[ 7215], 99.90th=[ 7215], 99.95th=[ 7215], 00:17:54.509 | 99.99th=[ 7215] 00:17:54.509 bw ( KiB/s): min= 2052, max=155648, per=2.78%, avg=81042.86, stdev=64382.18, samples=7 00:17:54.509 iops : min= 2, max= 152, avg=79.14, stdev=62.87, samples=7 00:17:54.509 lat (msec) : 1000=37.62%, 2000=4.95%, >=2000=57.43% 00:17:54.509 cpu : usr=0.02%, sys=0.82%, ctx=398, majf=0, minf=32769 00:17:54.509 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=7.9%, >=64=84.4% 00:17:54.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.509 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:17:54.509 issued rwts: total=404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.509 job0: (groupid=0, jobs=1): err= 0: pid=329795: Fri Apr 19 04:09:08 2024 00:17:54.509 read: IOPS=89, BW=89.1MiB/s (93.5MB/s)(900MiB/10097msec) 00:17:54.509 slat (usec): min=43, max=117566, avg=11108.48, stdev=21471.58 00:17:54.509 clat (msec): min=96, max=2686, avg=1377.66, stdev=610.01 00:17:54.509 lat (msec): min=152, max=2715, avg=1388.76, stdev=612.86 00:17:54.509 clat percentiles (msec): 00:17:54.509 | 1.00th=[ 296], 5.00th=[ 735], 10.00th=[ 818], 20.00th=[ 852], 00:17:54.509 | 30.00th=[ 919], 40.00th=[ 1070], 50.00th=[ 1150], 60.00th=[ 1334], 00:17:54.509 | 70.00th=[ 1603], 80.00th=[ 2039], 90.00th=[ 2433], 95.00th=[ 2534], 00:17:54.509 | 99.00th=[ 2635], 99.50th=[ 2635], 99.90th=[ 2702], 99.95th=[ 2702], 00:17:54.509 | 99.99th=[ 2702] 00:17:54.509 bw ( KiB/s): min=28672, max=155648, per=2.86%, avg=83321.26, stdev=43477.09, samples=19 00:17:54.509 iops : min= 28, max= 152, avg=81.37, stdev=42.46, samples=19 
00:17:54.509 lat (msec) : 100=0.11%, 250=0.78%, 500=1.22%, 750=4.00%, 1000=29.22% 00:17:54.509 lat (msec) : 2000=42.78%, >=2000=21.89% 00:17:54.509 cpu : usr=0.01%, sys=1.45%, ctx=1712, majf=0, minf=32769 00:17:54.509 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=93.0% 00:17:54.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.509 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:54.509 issued rwts: total=900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.509 job0: (groupid=0, jobs=1): err= 0: pid=329796: Fri Apr 19 04:09:08 2024 00:17:54.509 read: IOPS=84, BW=84.5MiB/s (88.6MB/s)(1006MiB/11911msec) 00:17:54.509 slat (usec): min=37, max=1411.9k, avg=9959.88, stdev=46291.41 00:17:54.509 clat (msec): min=161, max=4362, avg=1363.50, stdev=1079.61 00:17:54.509 lat (msec): min=162, max=4363, avg=1373.46, stdev=1083.85 00:17:54.509 clat percentiles (msec): 00:17:54.509 | 1.00th=[ 171], 5.00th=[ 205], 10.00th=[ 239], 20.00th=[ 351], 00:17:54.509 | 30.00th=[ 634], 40.00th=[ 684], 50.00th=[ 1234], 60.00th=[ 1603], 00:17:54.509 | 70.00th=[ 1754], 80.00th=[ 2165], 90.00th=[ 2500], 95.00th=[ 3943], 00:17:54.509 | 99.00th=[ 4329], 99.50th=[ 4329], 99.90th=[ 4396], 99.95th=[ 4396], 00:17:54.509 | 99.99th=[ 4396] 00:17:54.509 bw ( KiB/s): min= 1488, max=483328, per=3.85%, avg=112398.38, stdev=116800.88, samples=16 00:17:54.509 iops : min= 1, max= 472, avg=109.56, stdev=114.18, samples=16 00:17:54.509 lat (msec) : 250=13.92%, 500=10.83%, 750=19.18%, 1000=3.18%, 2000=29.32% 00:17:54.509 lat (msec) : >=2000=23.56% 00:17:54.509 cpu : usr=0.03%, sys=1.03%, ctx=2117, majf=0, minf=32769 00:17:54.509 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:17:54.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.509 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:17:54.509 issued rwts: total=1006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.509 job0: (groupid=0, jobs=1): err= 0: pid=329797: Fri Apr 19 04:09:08 2024 00:17:54.509 read: IOPS=40, BW=40.7MiB/s (42.7MB/s)(411MiB/10087msec) 00:17:54.509 slat (usec): min=104, max=2069.0k, avg=24349.34, stdev=126438.72 00:17:54.509 clat (msec): min=76, max=6685, avg=2942.59, stdev=2296.04 00:17:54.509 lat (msec): min=120, max=6700, avg=2966.94, stdev=2303.06 00:17:54.509 clat percentiles (msec): 00:17:54.509 | 1.00th=[ 176], 5.00th=[ 359], 10.00th=[ 625], 20.00th=[ 1083], 00:17:54.509 | 30.00th=[ 1452], 40.00th=[ 1636], 50.00th=[ 1770], 60.00th=[ 2039], 00:17:54.509 | 70.00th=[ 5201], 80.00th=[ 6275], 90.00th=[ 6544], 95.00th=[ 6678], 00:17:54.509 | 99.00th=[ 6678], 99.50th=[ 6678], 99.90th=[ 6678], 99.95th=[ 6678], 00:17:54.509 | 99.99th=[ 6678] 00:17:54.509 bw ( KiB/s): min= 2048, max=90112, per=1.53%, avg=44583.38, stdev=27312.90, samples=13 00:17:54.509 iops : min= 2, max= 88, avg=43.54, stdev=26.67, samples=13 00:17:54.509 lat (msec) : 100=0.24%, 250=1.95%, 500=5.60%, 750=4.87%, 1000=5.60% 00:17:54.509 lat (msec) : 2000=37.71%, >=2000=44.04% 00:17:54.509 cpu : usr=0.01%, sys=1.18%, ctx=1045, majf=0, minf=32769 00:17:54.509 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.9%, 32=7.8%, >=64=84.7% 00:17:54.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.509 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:17:54.509 issued rwts: total=411,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.509 job0: (groupid=0, jobs=1): err= 0: pid=329798: Fri Apr 19 04:09:08 2024 00:17:54.509 read: IOPS=168, BW=169MiB/s (177MB/s)(1699MiB/10080msec) 00:17:54.509 slat (usec): min=33, max=193839, avg=5892.96, stdev=13506.15 00:17:54.509 clat (msec): min=64, max=2157, avg=686.55, 
stdev=519.99 00:17:54.509 lat (msec): min=110, max=2162, avg=692.44, stdev=523.26 00:17:54.509 clat percentiles (msec): 00:17:54.509 | 1.00th=[ 232], 5.00th=[ 234], 10.00th=[ 234], 20.00th=[ 239], 00:17:54.509 | 30.00th=[ 243], 40.00th=[ 271], 50.00th=[ 575], 60.00th=[ 693], 00:17:54.509 | 70.00th=[ 944], 80.00th=[ 1150], 90.00th=[ 1469], 95.00th=[ 1720], 00:17:54.509 | 99.00th=[ 2089], 99.50th=[ 2140], 99.90th=[ 2165], 99.95th=[ 2165], 00:17:54.509 | 99.99th=[ 2165] 00:17:54.509 bw ( KiB/s): min=47104, max=551856, per=6.49%, avg=189238.59, stdev=177723.25, samples=17 00:17:54.509 iops : min= 46, max= 538, avg=184.65, stdev=173.52, samples=17 00:17:54.509 lat (msec) : 100=0.06%, 250=35.79%, 500=13.30%, 750=14.13%, 1000=8.24% 00:17:54.509 lat (msec) : 2000=26.55%, >=2000=1.94% 00:17:54.509 cpu : usr=0.01%, sys=1.35%, ctx=3171, majf=0, minf=32769 00:17:54.509 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:17:54.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.509 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:54.509 issued rwts: total=1699,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.509 job1: (groupid=0, jobs=1): err= 0: pid=329799: Fri Apr 19 04:09:08 2024 00:17:54.509 read: IOPS=3, BW=3339KiB/s (3419kB/s)(39.0MiB/11961msec) 00:17:54.509 slat (msec): min=2, max=2118, avg=305.13, stdev=713.57 00:17:54.509 clat (msec): min=60, max=11945, avg=7068.59, stdev=4009.59 00:17:54.509 lat (msec): min=2119, max=11960, avg=7373.72, stdev=3913.91 00:17:54.509 clat percentiles (msec): 00:17:54.509 | 1.00th=[ 61], 5.00th=[ 2123], 10.00th=[ 2140], 20.00th=[ 2198], 00:17:54.509 | 30.00th=[ 4279], 40.00th=[ 4329], 50.00th=[ 6477], 60.00th=[ 8557], 00:17:54.509 | 70.00th=[10671], 80.00th=[11879], 90.00th=[11879], 95.00th=[11879], 00:17:54.509 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 
00:17:54.509 | 99.99th=[11879]
00:17:54.509 lat (msec) : 100=2.56%, >=2000=97.44%
00:17:54.509 cpu : usr=0.00%, sys=0.18%, ctx=83, majf=0, minf=9985
00:17:54.509 IO depths : 1=2.6%, 2=5.1%, 4=10.3%, 8=20.5%, 16=41.0%, 32=20.5%, >=64=0.0%
00:17:54.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.509 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:17:54.509 issued rwts: total=39,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.509 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.509 job1: (groupid=0, jobs=1): err= 0: pid=329800: Fri Apr 19 04:09:08 2024
00:17:54.509 read: IOPS=49, BW=49.7MiB/s (52.1MB/s)(501MiB/10083msec)
00:17:54.509 slat (usec): min=36, max=2059.8k, avg=19984.24, stdev=124230.16
00:17:54.509 clat (msec): min=69, max=5380, avg=2031.50, stdev=1885.27
00:17:54.509 lat (msec): min=84, max=5402, avg=2051.49, stdev=1891.27
00:17:54.509 clat percentiles (msec):
00:17:54.509 | 1.00th=[ 136], 5.00th=[ 241], 10.00th=[ 409], 20.00th=[ 693],
00:17:54.509 | 30.00th=[ 768], 40.00th=[ 1133], 50.00th=[ 1234], 60.00th=[ 1351],
00:17:54.509 | 70.00th=[ 1519], 80.00th=[ 5201], 90.00th=[ 5269], 95.00th=[ 5269],
00:17:54.509 | 99.00th=[ 5336], 99.50th=[ 5403], 99.90th=[ 5403], 99.95th=[ 5403],
00:17:54.509 | 99.99th=[ 5403]
00:17:54.509 bw ( KiB/s): min= 6144, max=184320, per=2.62%, avg=76485.90, stdev=58609.91, samples=10
00:17:54.509 iops : min= 6, max= 180, avg=74.60, stdev=57.26, samples=10
00:17:54.509 lat (msec) : 100=0.60%, 250=5.79%, 500=7.58%, 750=14.37%, 1000=7.39%
00:17:54.509 lat (msec) : 2000=37.72%, >=2000=26.55%
00:17:54.509 cpu : usr=0.00%, sys=0.74%, ctx=938, majf=0, minf=32769
00:17:54.509 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.4%, >=64=87.4%
00:17:54.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.509 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:17:54.509 issued rwts: total=501,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.509 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.509 job1: (groupid=0, jobs=1): err= 0: pid=329801: Fri Apr 19 04:09:08 2024
00:17:54.509 read: IOPS=24, BW=24.2MiB/s (25.4MB/s)(291MiB/12029msec)
00:17:54.509 slat (usec): min=48, max=1993.8k, avg=34448.20, stdev=177014.22
00:17:54.509 clat (msec): min=2002, max=7960, avg=3989.59, stdev=887.24
00:17:54.509 lat (msec): min=2071, max=7972, avg=4024.04, stdev=892.16
00:17:54.509 clat percentiles (msec):
00:17:54.509 | 1.00th=[ 2072], 5.00th=[ 2232], 10.00th=[ 3037], 20.00th=[ 3507],
00:17:54.509 | 30.00th=[ 3675], 40.00th=[ 3977], 50.00th=[ 4111], 60.00th=[ 4144],
00:17:54.509 | 70.00th=[ 4178], 80.00th=[ 4279], 90.00th=[ 4933], 95.00th=[ 6141],
00:17:54.509 | 99.00th=[ 6879], 99.50th=[ 6946], 99.90th=[ 7953], 99.95th=[ 7953],
00:17:54.509 | 99.99th=[ 7953]
00:17:54.509 bw ( KiB/s): min= 1992, max=67584, per=1.28%, avg=37298.78, stdev=26672.36, samples=9
00:17:54.509 iops : min= 1, max= 66, avg=36.22, stdev=26.10, samples=9
00:17:54.509 lat (msec) : >=2000=100.00%
00:17:54.509 cpu : usr=0.02%, sys=0.78%, ctx=569, majf=0, minf=32769
00:17:54.509 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.5%, 32=11.0%, >=64=78.4%
00:17:54.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.509 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6%
00:17:54.509 issued rwts: total=291,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.509 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.509 job1: (groupid=0, jobs=1): err= 0: pid=329802: Fri Apr 19 04:09:08 2024
00:17:54.509 read: IOPS=68, BW=68.2MiB/s (71.5MB/s)(960MiB/14075msec)
00:17:54.509 slat (usec): min=39, max=2095.5k, avg=12452.31, stdev=116661.71
00:17:54.509 clat (msec): min=354, max=8904, avg=1794.83, stdev=2671.74
00:17:54.509 lat (msec): min=355, max=8906, avg=1807.28, stdev=2679.66
00:17:54.509 clat percentiles (msec):
00:17:54.509 | 1.00th=[ 355], 5.00th=[ 359], 10.00th=[ 359], 20.00th=[ 380],
00:17:54.509 | 30.00th=[ 600], 40.00th=[ 768], 50.00th=[ 852], 60.00th=[ 953],
00:17:54.509 | 70.00th=[ 995], 80.00th=[ 1167], 90.00th=[ 8658], 95.00th=[ 8792],
00:17:54.509 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926],
00:17:54.509 | 99.99th=[ 8926]
00:17:54.509 bw ( KiB/s): min= 2052, max=356352, per=4.50%, avg=131203.00, stdev=118822.56, samples=13
00:17:54.509 iops : min= 2, max= 348, avg=128.00, stdev=116.11, samples=13
00:17:54.509 lat (msec) : 500=27.19%, 750=10.31%, 1000=33.33%, 2000=15.00%, >=2000=14.17%
00:17:54.509 cpu : usr=0.02%, sys=0.87%, ctx=1544, majf=0, minf=32769
00:17:54.509 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.3%, >=64=93.4%
00:17:54.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.509 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:54.509 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.509 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.509 job1: (groupid=0, jobs=1): err= 0: pid=329803: Fri Apr 19 04:09:08 2024
00:17:54.509 read: IOPS=68, BW=68.8MiB/s (72.2MB/s)(821MiB/11928msec)
00:17:54.509 slat (usec): min=32, max=2115.7k, avg=14451.40, stdev=125826.64
00:17:54.509 clat (msec): min=60, max=7003, avg=1748.58, stdev=2015.41
00:17:54.509 lat (msec): min=633, max=7005, avg=1763.03, stdev=2020.58
00:17:54.509 clat percentiles (msec):
00:17:54.509 | 1.00th=[ 634], 5.00th=[ 676], 10.00th=[ 693], 20.00th=[ 726],
00:17:54.509 | 30.00th=[ 726], 40.00th=[ 760], 50.00th=[ 802], 60.00th=[ 902],
00:17:54.509 | 70.00th=[ 1133], 80.00th=[ 1385], 90.00th=[ 6544], 95.00th=[ 6812],
00:17:54.509 | 99.00th=[ 7013], 99.50th=[ 7013], 99.90th=[ 7013], 99.95th=[ 7013],
00:17:54.509 | 99.99th=[ 7013]
00:17:54.509 bw ( KiB/s): min= 4096, max=196608, per=4.06%, avg=118272.00, stdev=71704.60, samples=12
00:17:54.509 iops : min= 4, max= 192, avg=115.50, stdev=70.02, samples=12
00:17:54.510 lat (msec) : 100=0.12%, 750=35.57%, 1000=28.87%, 2000=17.17%, >=2000=18.27%
00:17:54.510 cpu : usr=0.01%, sys=0.90%, ctx=1012, majf=0, minf=32769
00:17:54.510 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.3%
00:17:54.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.510 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:54.510 issued rwts: total=821,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.510 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.510 job1: (groupid=0, jobs=1): err= 0: pid=329804: Fri Apr 19 04:09:08 2024
00:17:54.510 read: IOPS=38, BW=38.5MiB/s (40.4MB/s)(389MiB/10101msec)
00:17:54.510 slat (usec): min=53, max=2076.0k, avg=25719.56, stdev=148533.51
00:17:54.510 clat (msec): min=93, max=7834, avg=3120.09, stdev=2360.10
00:17:54.510 lat (msec): min=113, max=7840, avg=3145.81, stdev=2366.92
00:17:54.510 clat percentiles (msec):
00:17:54.510 | 1.00th=[ 215], 5.00th=[ 464], 10.00th=[ 743], 20.00th=[ 1150],
00:17:54.510 | 30.00th=[ 1385], 40.00th=[ 1452], 50.00th=[ 1787], 60.00th=[ 3540],
00:17:54.510 | 70.00th=[ 5000], 80.00th=[ 6678], 90.00th=[ 6745], 95.00th=[ 7013],
00:17:54.510 | 99.00th=[ 7080], 99.50th=[ 7080], 99.90th=[ 7819], 99.95th=[ 7819],
00:17:54.510 | 99.99th=[ 7819]
00:17:54.510 bw ( KiB/s): min= 4096, max=110592, per=1.84%, avg=53657.60, stdev=33189.12, samples=10
00:17:54.510 iops : min= 4, max= 108, avg=52.40, stdev=32.41, samples=10
00:17:54.510 lat (msec) : 100=0.26%, 250=1.03%, 500=4.11%, 750=5.14%, 1000=4.11%
00:17:54.510 lat (msec) : 2000=41.13%, >=2000=44.22%
00:17:54.510 cpu : usr=0.02%, sys=1.05%, ctx=697, majf=0, minf=32120
00:17:54.510 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.1%, 32=8.2%, >=64=83.8%
00:17:54.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.510 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:17:54.510 issued rwts: total=389,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.510 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.510 job1: (groupid=0, jobs=1): err= 0: pid=329805: Fri Apr 19 04:09:08 2024
00:17:54.510 read: IOPS=142, BW=142MiB/s (149MB/s)(1703MiB/11966msec)
00:17:54.510 slat (usec): min=39, max=2111.8k, avg=6986.75, stdev=71706.47
00:17:54.510 clat (msec): min=60, max=4591, avg=850.93, stdev=1024.74
00:17:54.510 lat (msec): min=288, max=4592, avg=857.92, stdev=1027.72
00:17:54.510 clat percentiles (msec):
00:17:54.510 | 1.00th=[ 296], 5.00th=[ 313], 10.00th=[ 342], 20.00th=[ 388],
00:17:54.510 | 30.00th=[ 397], 40.00th=[ 485], 50.00th=[ 535], 60.00th=[ 600],
00:17:54.510 | 70.00th=[ 667], 80.00th=[ 852], 90.00th=[ 1116], 95.00th=[ 4396],
00:17:54.510 | 99.00th=[ 4530], 99.50th=[ 4597], 99.90th=[ 4597], 99.95th=[ 4597],
00:17:54.510 | 99.99th=[ 4597]
00:17:54.510 bw ( KiB/s): min= 6144, max=415744, per=6.91%, avg=201605.88, stdev=115373.73, samples=16
00:17:54.510 iops : min= 6, max= 406, avg=196.75, stdev=112.72, samples=16
00:17:54.510 lat (msec) : 100=0.06%, 500=45.21%, 750=31.71%, 1000=7.46%, 2000=7.52%
00:17:54.510 lat (msec) : >=2000=8.04%
00:17:54.510 cpu : usr=0.08%, sys=1.24%, ctx=1842, majf=0, minf=32769
00:17:54.510 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3%
00:17:54.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.510 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:54.510 issued rwts: total=1703,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.510 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.510 job1: (groupid=0, jobs=1): err= 0: pid=329806: Fri Apr 19 04:09:08 2024
00:17:54.510 read: IOPS=4, BW=4134KiB/s (4234kB/s)(57.0MiB/14118msec)
00:17:54.510 slat (usec): min=642, max=2125.2k, avg=210840.83, stdev=608833.88
00:17:54.510 clat (msec): min=2098, max=14114, avg=12323.29, stdev=3188.68
00:17:54.510 lat (msec): min=4197, max=14117, avg=12534.13, stdev=2883.25
00:17:54.510 clat percentiles (msec):
00:17:54.510 | 1.00th=[ 2106], 5.00th=[ 4279], 10.00th=[ 6409], 20.00th=[10671],
00:17:54.510 | 30.00th=[13892], 40.00th=[14026], 50.00th=[14026], 60.00th=[14026],
00:17:54.510 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14160], 95.00th=[14160],
00:17:54.510 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160],
00:17:54.510 | 99.99th=[14160]
00:17:54.510 lat (msec) : >=2000=100.00%
00:17:54.510 cpu : usr=0.00%, sys=0.37%, ctx=95, majf=0, minf=14593
00:17:54.510 IO depths : 1=1.8%, 2=3.5%, 4=7.0%, 8=14.0%, 16=28.1%, 32=45.6%, >=64=0.0%
00:17:54.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.510 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:17:54.510 issued rwts: total=57,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.510 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.510 job1: (groupid=0, jobs=1): err= 0: pid=329808: Fri Apr 19 04:09:08 2024
00:17:54.510 read: IOPS=6, BW=6377KiB/s (6530kB/s)(88.0MiB/14130msec)
00:17:54.510 slat (usec): min=404, max=2126.1k, avg=136610.66, stdev=490894.94
00:17:54.510 clat (msec): min=2107, max=14125, avg=12998.18, stdev=2567.85
00:17:54.510 lat (msec): min=4209, max=14128, avg=13134.79, stdev=2286.10
00:17:54.510 clat percentiles (msec):
00:17:54.510 | 1.00th=[ 2106], 5.00th=[ 6342], 10.00th=[ 8557], 20.00th=[13489],
00:17:54.510 | 30.00th=[13624], 40.00th=[13758], 50.00th=[13892], 60.00th=[14026],
00:17:54.510 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14160], 95.00th=[14160],
00:17:54.510 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160],
00:17:54.510 | 99.99th=[14160]
00:17:54.510 lat (msec) : >=2000=100.00%
00:17:54.510 cpu : usr=0.01%, sys=0.47%, ctx=152, majf=0, minf=22529
00:17:54.510 IO depths : 1=1.1%, 2=2.3%, 4=4.5%, 8=9.1%, 16=18.2%, 32=36.4%, >=64=28.4%
00:17:54.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.510 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:17:54.510 issued rwts: total=88,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.510 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.510 job1: (groupid=0, jobs=1): err= 0: pid=329809: Fri Apr 19 04:09:08 2024
00:17:54.510 read: IOPS=2, BW=2471KiB/s (2530kB/s)(29.0MiB/12018msec)
00:17:54.510 slat (usec): min=1305, max=3242.7k, avg=412353.13, stdev=907812.84
00:17:54.510 clat (msec): min=58, max=12015, avg=7964.15, stdev=4482.31
00:17:54.510 lat (msec): min=2119, max=12017, avg=8376.51, stdev=4274.32
00:17:54.510 clat percentiles (msec):
00:17:54.510 | 1.00th=[ 59], 5.00th=[ 2123], 10.00th=[ 2123], 20.00th=[ 2165],
00:17:54.510 | 30.00th=[ 4329], 40.00th=[ 6409], 50.00th=[11879], 60.00th=[11879],
00:17:54.510 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013],
00:17:54.510 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013],
00:17:54.510 | 99.99th=[12013]
00:17:54.510 lat (msec) : 100=3.45%, >=2000=96.55%
00:17:54.510 cpu : usr=0.00%, sys=0.22%, ctx=88, majf=0, minf=7425
00:17:54.510 IO depths : 1=3.4%, 2=6.9%, 4=13.8%, 8=27.6%, 16=48.3%, 32=0.0%, >=64=0.0%
00:17:54.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.510 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:17:54.510 issued rwts: total=29,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.510 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.510 job1: (groupid=0, jobs=1): err= 0: pid=329810: Fri Apr 19 04:09:08 2024
00:17:54.510 read: IOPS=63, BW=63.9MiB/s (67.0MB/s)(647MiB/10126msec)
00:17:54.510 slat (usec): min=62, max=1872.3k, avg=15504.29, stdev=86803.86
00:17:54.510 clat (msec): min=91, max=3316, avg=1626.60, stdev=818.89
00:17:54.510 lat (msec): min=160, max=3318, avg=1642.10, stdev=821.65
00:17:54.510 clat percentiles (msec):
00:17:54.510 | 1.00th=[ 171], 5.00th=[ 527], 10.00th=[ 827], 20.00th=[ 1062],
00:17:54.510 | 30.00th=[ 1200], 40.00th=[ 1318], 50.00th=[ 1401], 60.00th=[ 1452],
00:17:54.510 | 70.00th=[ 1586], 80.00th=[ 2735], 90.00th=[ 3104], 95.00th=[ 3171],
00:17:54.510 | 99.00th=[ 3239], 99.50th=[ 3272], 99.90th=[ 3306], 99.95th=[ 3306],
00:17:54.510 | 99.99th=[ 3306]
00:17:54.510 bw ( KiB/s): min=63488, max=135168, per=3.04%, avg=88576.00, stdev=22690.32, samples=12
00:17:54.510 iops : min= 62, max= 132, avg=86.50, stdev=22.16, samples=12
00:17:54.510 lat (msec) : 100=0.15%, 250=0.93%, 500=3.86%, 750=3.86%, 1000=3.55%
00:17:54.510 lat (msec) : 2000=65.22%, >=2000=22.41%
00:17:54.510 cpu : usr=0.07%, sys=0.91%, ctx=1174, majf=0, minf=32769
00:17:54.510 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.3%
00:17:54.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.510 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:17:54.510 issued rwts: total=647,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.510 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.510 job1: (groupid=0, jobs=1): err= 0: pid=329811: Fri Apr 19 04:09:08 2024
00:17:54.510 read: IOPS=2, BW=2616KiB/s (2679kB/s)(36.0MiB/14090msec)
00:17:54.510 slat (usec): min=682, max=2129.7k, avg=332545.07, stdev=741707.01
00:17:54.510 clat (msec): min=2117, max=14087, avg=10828.02, stdev=3962.02
00:17:54.510 lat (msec): min=4206, max=14089, avg=11160.57, stdev=3704.03
00:17:54.510 clat percentiles (msec):
00:17:54.510 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6409],
00:17:54.510 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[12818], 60.00th=[14026],
00:17:54.510 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026],
00:17:54.510 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026],
00:17:54.510 | 99.99th=[14026]
00:17:54.510 lat (msec) : >=2000=100.00%
00:17:54.510 cpu : usr=0.00%, sys=0.20%, ctx=78, majf=0, minf=9217
00:17:54.510 IO depths : 1=2.8%, 2=5.6%, 4=11.1%, 8=22.2%, 16=44.4%, 32=13.9%, >=64=0.0%
00:17:54.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.510 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:17:54.510 issued rwts: total=36,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.510 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.510 job1: (groupid=0, jobs=1): err= 0: pid=329812: Fri Apr 19 04:09:08 2024
00:17:54.510 read: IOPS=43, BW=43.4MiB/s (45.6MB/s)(610MiB/14041msec)
00:17:54.510 slat (usec): min=342, max=2097.5k, avg=19559.40, stdev=151360.14
00:17:54.510 clat (msec): min=693, max=10198, avg=2778.84, stdev=3539.62
00:17:54.510 lat (msec): min=697, max=10201, avg=2798.40, stdev=3549.07
00:17:54.510 clat percentiles (msec):
00:17:54.510 | 1.00th=[ 701], 5.00th=[ 701], 10.00th=[ 726], 20.00th=[ 818],
00:17:54.510 | 30.00th=[ 877], 40.00th=[ 936], 50.00th=[ 986], 60.00th=[ 1053],
00:17:54.510 | 70.00th=[ 1116], 80.00th=[ 6342], 90.00th=[ 9866], 95.00th=[10000],
00:17:54.510 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134],
00:17:54.510 | 99.99th=[10134]
00:17:54.510 bw ( KiB/s): min= 2052, max=180224, per=2.83%, avg=82430.92, stdev=75310.88, samples=12
00:17:54.510 iops : min= 2, max= 176, avg=80.42, stdev=73.63, samples=12
00:17:54.510 lat (msec) : 750=11.80%, 1000=41.80%, 2000=24.10%, >=2000=22.30%
00:17:54.510 cpu : usr=0.03%, sys=0.66%, ctx=1203, majf=0, minf=32769
00:17:54.510 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.7%
00:17:54.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.510 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:17:54.510 issued rwts: total=610,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.510 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.510 job2: (groupid=0, jobs=1): err= 0: pid=329815: Fri Apr 19 04:09:08 2024
00:17:54.510 read: IOPS=25, BW=26.0MiB/s (27.2MB/s)(364MiB/14008msec)
00:17:54.510 slat (usec): min=55, max=4243.1k, avg=32673.73, stdev=277362.24
00:17:54.510 clat (msec): min=719, max=12820, avg=3732.42, stdev=3822.33
00:17:54.510 lat (msec): min=723, max=12834, avg=3765.10, stdev=3826.38
00:17:54.510 clat percentiles (msec):
00:17:54.510 | 1.00th=[ 726], 5.00th=[ 726], 10.00th=[ 726], 20.00th=[ 735],
00:17:54.510 | 30.00th=[ 760], 40.00th=[ 793], 50.00th=[ 860], 60.00th=[ 2869],
00:17:54.510 | 70.00th=[ 8658], 80.00th=[ 8792], 90.00th=[ 9060], 95.00th=[ 9194],
00:17:54.510 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[12818], 99.95th=[12818],
00:17:54.510 | 99.99th=[12818]
00:17:54.510 bw ( KiB/s): min= 2048, max=180224, per=2.78%, avg=80941.67, stdev=84681.03, samples=6
00:17:54.510 iops : min= 2, max= 176, avg=79.00, stdev=82.66, samples=6
00:17:54.510 lat (msec) : 750=27.47%, 1000=32.14%, >=2000=40.38%
00:17:54.510 cpu : usr=0.01%, sys=0.70%, ctx=353, majf=0, minf=32769
00:17:54.510 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.8%, >=64=82.7%
00:17:54.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.510 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:17:54.510 issued rwts: total=364,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.510 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.510 job2: (groupid=0, jobs=1): err= 0: pid=329816: Fri Apr 19 04:09:08 2024
00:17:54.510 read: IOPS=2, BW=2853KiB/s (2922kB/s)(39.0MiB/13997msec)
00:17:54.510 slat (usec): min=1445, max=2148.0k, avg=304746.84, stdev=705092.38
00:17:54.510 clat (msec): min=2110, max=13987, avg=11737.35, stdev=3487.47
00:17:54.510 lat (msec): min=4186, max=13996, avg=12042.10, stdev=3124.53
00:17:54.510 clat percentiles (msec):
00:17:54.510 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4279], 20.00th=[ 8557],
00:17:54.510 | 30.00th=[13087], 40.00th=[13221], 50.00th=[13355], 60.00th=[13489],
00:17:54.510 | 70.00th=[13624], 80.00th=[13758], 90.00th=[13892], 95.00th=[14026],
00:17:54.510 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026],
00:17:54.510 | 99.99th=[14026]
00:17:54.510 lat (msec) : >=2000=100.00%
00:17:54.510 cpu : usr=0.01%, sys=0.16%, ctx=173, majf=0, minf=9985
00:17:54.511 IO depths : 1=2.6%, 2=5.1%, 4=10.3%, 8=20.5%, 16=41.0%, 32=20.5%, >=64=0.0%
00:17:54.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.511 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:17:54.511 issued rwts: total=39,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.511 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.511 job2: (groupid=0, jobs=1): err= 0: pid=329817: Fri Apr 19 04:09:08 2024
00:17:54.511 read: IOPS=4, BW=4904KiB/s (5022kB/s)(57.0MiB/11901msec)
00:17:54.511 slat (usec): min=347, max=2139.2k, avg=175584.08, stdev=555555.16
00:17:54.511 clat (msec): min=1891, max=11878, avg=3180.31, stdev=2649.70
00:17:54.511 lat (msec): min=1963, max=11900, avg=3355.90, stdev=2884.05
00:17:54.511 clat percentiles (msec):
00:17:54.511 | 1.00th=[ 1888], 5.00th=[ 1972], 10.00th=[ 1972], 20.00th=[ 1972],
00:17:54.511 | 30.00th=[ 2056], 40.00th=[ 2072], 50.00th=[ 2089], 60.00th=[ 2198],
00:17:54.511 | 70.00th=[ 2198], 80.00th=[ 2198], 90.00th=[ 6477], 95.00th=[10671],
00:17:54.511 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879],
00:17:54.511 | 99.99th=[11879]
00:17:54.511 lat (msec) : 2000=28.07%, >=2000=71.93%
00:17:54.511 cpu : usr=0.00%, sys=0.24%, ctx=87, majf=0, minf=14593
00:17:54.511 IO depths : 1=1.8%, 2=3.5%, 4=7.0%, 8=14.0%, 16=28.1%, 32=45.6%, >=64=0.0%
00:17:54.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.511 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:17:54.511 issued rwts: total=57,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.511 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.511 job2: (groupid=0, jobs=1): err= 0: pid=329818: Fri Apr 19 04:09:08 2024
00:17:54.511 read: IOPS=1, BW=1627KiB/s (1666kB/s)(19.0MiB/11955msec)
00:17:54.511 slat (msec): min=5, max=2112, avg=625.01, stdev=928.73
00:17:54.511 clat (msec): min=79, max=11948, avg=6691.60, stdev=3968.55
00:17:54.511 lat (msec): min=2122, max=11954, avg=7316.61, stdev=3800.87
00:17:54.511 clat percentiles (msec):
00:17:54.511 | 1.00th=[ 80], 5.00th=[ 80], 10.00th=[ 2123], 20.00th=[ 2198],
00:17:54.511 | 30.00th=[ 4279], 40.00th=[ 4279], 50.00th=[ 6409], 60.00th=[ 8557],
00:17:54.511 | 70.00th=[10671], 80.00th=[11879], 90.00th=[11879], 95.00th=[12013],
00:17:54.511 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013],
00:17:54.511 | 99.99th=[12013]
00:17:54.511 lat (msec) : 100=5.26%, >=2000=94.74%
00:17:54.511 cpu : usr=0.00%, sys=0.10%, ctx=81, majf=0, minf=4865
00:17:54.511 IO depths : 1=5.3%, 2=10.5%, 4=21.1%, 8=42.1%, 16=21.1%, 32=0.0%, >=64=0.0%
00:17:54.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.511 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:17:54.511 issued rwts: total=19,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.511 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.511 job2: (groupid=0, jobs=1): err= 0: pid=329819: Fri Apr 19 04:09:08 2024
00:17:54.511 read: IOPS=25, BW=25.4MiB/s (26.6MB/s)(306MiB/12061msec)
00:17:54.511 slat (usec): min=105, max=2132.8k, avg=39148.98, stdev=220078.68
00:17:54.511 clat (msec): min=78, max=7958, avg=4661.07, stdev=2461.05
00:17:54.511 lat (msec): min=1366, max=7966, avg=4700.22, stdev=2458.30
00:17:54.511 clat percentiles (msec):
00:17:54.511 | 1.00th=[ 1368], 5.00th=[ 1469], 10.00th=[ 1536], 20.00th=[ 2567],
00:17:54.511 | 30.00th=[ 2802], 40.00th=[ 3171], 50.00th=[ 3540], 60.00th=[ 7080],
00:17:54.511 | 70.00th=[ 7215], 80.00th=[ 7483], 90.00th=[ 7819], 95.00th=[ 7886],
00:17:54.511 | 99.00th=[ 7953], 99.50th=[ 7953], 99.90th=[ 7953], 99.95th=[ 7953],
00:17:54.511 | 99.99th=[ 7953]
00:17:54.511 bw ( KiB/s): min= 2048, max=110592, per=1.56%, avg=45568.00, stdev=42252.52, samples=8
00:17:54.511 iops : min= 2, max= 108, avg=44.50, stdev=41.26, samples=8
00:17:54.511 lat (msec) : 100=0.33%, 2000=13.73%, >=2000=85.95%
00:17:54.511 cpu : usr=0.02%, sys=0.96%, ctx=701, majf=0, minf=32769
00:17:54.511 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.5%, >=64=79.4%
00:17:54.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.511 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6%
00:17:54.511 issued rwts: total=306,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.511 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.511 job2: (groupid=0, jobs=1): err= 0: pid=329820: Fri Apr 19 04:09:08 2024
00:17:54.511 read: IOPS=2, BW=2908KiB/s (2978kB/s)(40.0MiB/14085msec)
00:17:54.511 slat (usec): min=824, max=2169.0k, avg=299191.77, stdev=715028.87
00:17:54.511 clat (msec): min=2116, max=14081, avg=11962.31, stdev=3745.74
00:17:54.511 lat (msec): min=4198, max=14084, avg=12261.50, stdev=3401.30
00:17:54.511 clat percentiles (msec):
00:17:54.511 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 8490],
00:17:54.511 | 30.00th=[12818], 40.00th=[14026], 50.00th=[14026], 60.00th=[14026],
00:17:54.511 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026],
00:17:54.511 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026],
00:17:54.511 | 99.99th=[14026]
00:17:54.511 lat (msec) : >=2000=100.00%
00:17:54.511 cpu : usr=0.00%, sys=0.23%, ctx=98, majf=0, minf=10241
00:17:54.511 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0%
00:17:54.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.511 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:17:54.511 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.511 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.511 job2: (groupid=0, jobs=1): err= 0: pid=329821: Fri Apr 19 04:09:08 2024
00:17:54.511 read: IOPS=19, BW=19.4MiB/s (20.4MB/s)(234MiB/12035msec)
00:17:54.511 slat (usec): min=614, max=2129.1k, avg=51088.58, stdev=272810.36
00:17:54.511 clat (msec): min=78, max=8144, avg=5228.85, stdev=2312.97
00:17:54.511 lat (msec): min=1538, max=8157, avg=5279.94, stdev=2299.88
00:17:54.511 clat percentiles (msec):
00:17:54.511 | 1.00th=[ 1536], 5.00th=[ 2702], 10.00th=[ 2869], 20.00th=[ 3138],
00:17:54.511 | 30.00th=[ 3440], 40.00th=[ 3675], 50.00th=[ 3977], 60.00th=[ 6409],
00:17:54.511 | 70.00th=[ 7886], 80.00th=[ 7953], 90.00th=[ 8020], 95.00th=[ 8087],
00:17:54.511 | 99.00th=[ 8154], 99.50th=[ 8154], 99.90th=[ 8154], 99.95th=[ 8154],
00:17:54.511 | 99.99th=[ 8154]
00:17:54.511 bw ( KiB/s): min= 8192, max=83968, per=1.86%, avg=54272.00, stdev=33212.98, samples=4
00:17:54.511 iops : min= 8, max= 82, avg=53.00, stdev=32.43, samples=4
00:17:54.511 lat (msec) : 100=0.43%, 2000=0.85%, >=2000=98.72%
00:17:54.511 cpu : usr=0.00%, sys=0.94%, ctx=499, majf=0, minf=32769
00:17:54.511 IO depths : 1=0.4%, 2=0.9%, 4=1.7%, 8=3.4%, 16=6.8%, 32=13.7%, >=64=73.1%
00:17:54.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.511 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9%
00:17:54.511 issued rwts: total=234,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.511 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.511 job2: (groupid=0, jobs=1): err= 0: pid=329822: Fri Apr 19 04:09:08 2024
00:17:54.511 read: IOPS=12, BW=12.5MiB/s (13.1MB/s)(150MiB/11962msec)
00:17:54.511 slat (usec): min=621, max=2150.4k, avg=79211.00, stdev=364510.08
00:17:54.511 clat (msec): min=78, max=11728, avg=9572.41, stdev=2695.41
00:17:54.511 lat (msec): min=2127, max=11739, avg=9651.62, stdev=2583.75
00:17:54.511 clat percentiles (msec):
00:17:54.511 | 1.00th=[ 2123], 5.00th=[ 4329], 10.00th=[ 5604], 20.00th=[ 6208],
00:17:54.511 | 30.00th=[ 8557], 40.00th=[10805], 50.00th=[11073], 60.00th=[11208],
00:17:54.511 | 70.00th=[11342], 80.00th=[11476], 90.00th=[11610], 95.00th=[11745],
00:17:54.511 | 99.00th=[11745], 99.50th=[11745], 99.90th=[11745], 99.95th=[11745],
00:17:54.511 | 99.99th=[11745]
00:17:54.511 bw ( KiB/s): min= 4039, max=28672, per=0.51%, avg=14991.33, stdev=12541.09, samples=3
00:17:54.511 iops : min= 3, max= 28, avg=14.00, stdev=12.77, samples=3
00:17:54.511 lat (msec) : 100=0.67%, >=2000=99.33%
00:17:54.511 cpu : usr=0.01%, sys=0.80%, ctx=275, majf=0, minf=32769
00:17:54.511 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=5.3%, 16=10.7%, 32=21.3%, >=64=58.0%
00:17:54.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.511 complete : 0=0.0%, 4=95.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=4.2%
00:17:54.511 issued rwts: total=150,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.511 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.511 job2: (groupid=0, jobs=1): err= 0: pid=329823: Fri Apr 19 04:09:08 2024
00:17:54.511 read: IOPS=82, BW=82.2MiB/s (86.2MB/s)(1160MiB/14117msec)
00:17:54.511 slat (usec): min=46, max=2152.7k, avg=10342.75, stdev=88284.15
00:17:54.511 clat (msec): min=246, max=7218, avg=1427.25, stdev=1926.54
00:17:54.511 lat (msec): min=248, max=7218, avg=1437.60, stdev=1932.81
00:17:54.511 clat percentiles (msec):
00:17:54.511 | 1.00th=[ 326], 5.00th=[ 355], 10.00th=[ 355], 20.00th=[ 372],
00:17:54.511 | 30.00th=[ 397], 40.00th=[ 502], 50.00th=[ 735], 60.00th=[ 860],
00:17:54.511 | 70.00th=[ 1183], 80.00th=[ 1452], 90.00th=[ 6477], 95.00th=[ 6812],
00:17:54.511 | 99.00th=[ 7148], 99.50th=[ 7215], 99.90th=[ 7215], 99.95th=[ 7215],
00:17:54.511 | 99.99th=[ 7215]
00:17:54.511 bw ( KiB/s): min= 2052, max=358400, per=5.18%, avg=151113.43, stdev=116955.62, samples=14
00:17:54.511 iops : min= 2, max= 350, avg=147.57, stdev=114.21, samples=14
00:17:54.511 lat (msec) : 250=0.17%, 500=39.91%, 750=10.17%, 1000=16.38%, 2000=19.66%
00:17:54.511 lat (msec) : >=2000=13.71%
00:17:54.511 cpu : usr=0.01%, sys=0.98%, ctx=1679, majf=0, minf=32769
00:17:54.511 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.6%
00:17:54.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.511 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:54.511 issued rwts: total=1160,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.511 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.511 job2: (groupid=0, jobs=1): err= 0: pid=329824: Fri Apr 19 04:09:08 2024
00:17:54.511 read: IOPS=146, BW=146MiB/s (153MB/s)(2053MiB/14054msec)
00:17:54.511 slat (usec): min=30, max=2003.5k, avg=5810.93, stdev=57459.49
00:17:54.511 clat (msec): min=303, max=6467, avg=841.20, stdev=1230.22
00:17:54.511 lat (msec): min=305, max=6473, avg=847.01, stdev=1234.56
00:17:54.511 clat percentiles (msec):
00:17:54.511 | 1.00th=[ 330], 5.00th=[ 384], 10.00th=[ 401], 20.00th=[ 430],
00:17:54.511 | 30.00th=[ 451], 40.00th=[ 477], 50.00th=[ 506], 60.00th=[ 535],
00:17:54.511 | 70.00th=[ 575], 80.00th=[ 617], 90.00th=[ 693], 95.00th=[ 4212],
00:17:54.511 | 99.00th=[ 6074], 99.50th=[ 6141], 99.90th=[ 6477], 99.95th=[ 6477],
00:17:54.511 | 99.99th=[ 6477]
00:17:54.511 bw ( KiB/s): min= 2052, max=327680, per=7.51%, avg=219101.67, stdev=92102.88, samples=18
00:17:54.511 iops : min= 2, max= 320, avg=213.94, stdev=89.92, samples=18
00:17:54.511 lat (msec) : 500=47.74%, 750=43.84%, 1000=0.34%, 2000=0.54%, >=2000=7.55%
00:17:54.511 cpu : usr=0.03%, sys=1.08%, ctx=2407, majf=0, minf=32769
00:17:54.511 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9%
00:17:54.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.511 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:54.511 issued rwts: total=2053,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.511 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.512 job2: (groupid=0, jobs=1): err= 0: pid=329825: Fri Apr 19 04:09:08 2024
00:17:54.512 read: IOPS=0, BW=802KiB/s (821kB/s)(11.0MiB/14042msec)
00:17:54.512 slat (msec): min=13, max=4238, avg=1084.55, stdev=1397.64
00:17:54.512 clat (msec): min=2111, max=13927, avg=7852.67, stdev=4024.94
00:17:54.512 lat (msec): min=4201, max=14041, avg=8937.22, stdev=3929.39
00:17:54.512 clat percentiles (msec):
00:17:54.512 | 1.00th=[ 2106], 5.00th=[ 2106], 10.00th=[ 4212], 20.00th=[ 4245],
00:17:54.512 | 30.00th=[ 4279], 40.00th=[ 6342], 50.00th=[ 6409], 60.00th=[10671],
00:17:54.512 | 70.00th=[10671], 80.00th=[10671], 90.00th=[12818], 95.00th=[13892],
00:17:54.512 | 99.00th=[13892], 99.50th=[13892], 99.90th=[13892], 99.95th=[13892],
00:17:54.512 | 99.99th=[13892]
00:17:54.512 lat (msec) : >=2000=100.00%
00:17:54.512 cpu : usr=0.00%, sys=0.04%, ctx=64, majf=0, minf=2817
00:17:54.512 IO depths : 1=9.1%, 2=18.2%, 4=36.4%, 8=36.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:17:54.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.512 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.512 issued rwts: total=11,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.512 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.512 job2: (groupid=0, jobs=1): err= 0: pid=329826: Fri Apr 19 04:09:08 2024
00:17:54.512 read: IOPS=9, BW=10.00MiB/s (10.5MB/s)(140MiB/14002msec)
00:17:54.512 slat (usec): min=100, max=2098.6k, avg=84926.29, stdev=372574.86
00:17:54.512 clat (msec): min=2111, max=13914, avg=9939.93, stdev=2272.68
00:17:54.512 lat (msec): min=2866, max=14001, avg=10024.86, stdev=2198.75
00:17:54.512 clat percentiles (msec):
00:17:54.512 | 1.00th=[ 2869], 5.00th=[ 2869], 10.00th=[ 6409], 20.00th=[10000],
00:17:54.512 | 30.00th=[10134], 40.00th=[10268], 50.00th=[10268], 60.00th=[10402],
00:17:54.512 | 70.00th=[10537], 80.00th=[10671], 90.00th=[12684], 95.00th=[12684],
00:17:54.512 | 99.00th=[12818], 99.50th=[13892], 99.90th=[13892], 99.95th=[13892],
00:17:54.512 | 99.99th=[13892]
00:17:54.512 bw ( KiB/s): min= 2048, max=12288, per=0.17%, avg=5011.00, stdev=4228.49, samples=5
00:17:54.512 iops : min= 2, max= 12, avg= 4.80, stdev= 4.15, samples=5
00:17:54.512 lat (msec) : >=2000=100.00%
00:17:54.512 cpu : usr=0.00%, sys=0.64%, ctx=142, majf=0, minf=32769
00:17:54.512 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=5.7%, 16=11.4%, 32=22.9%, >=64=55.0%
00:17:54.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.512 complete : 0=0.0%, 4=92.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=7.1%
00:17:54.512 issued rwts: total=140,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.512 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.512 job2: (groupid=0, jobs=1): err= 0: pid=329827: Fri Apr 19 04:09:08 2024
00:17:54.512 read: IOPS=91, BW=91.5MiB/s (96.0MB/s)(1095MiB/11964msec)
00:17:54.512 slat (usec): min=66, max=2104.7k, avg=10852.23, stdev=86206.52
00:17:54.512 clat (msec): min=76, max=4652, avg=1263.71, stdev=1118.76
00:17:54.512 lat (msec): min=262, max=4654, avg=1274.56, stdev=1121.51
00:17:54.512 clat percentiles (msec):
00:17:54.512 | 1.00th=[ 264], 5.00th=[ 313], 10.00th=[ 376], 20.00th=[ 567],
00:17:54.512 | 30.00th=[ 625], 40.00th=[ 860], 50.00th=[ 911], 60.00th=[ 1083],
00:17:54.512 | 70.00th=[ 1183], 80.00th=[ 1368], 90.00th=[ 2567], 95.00th=[ 4530],
00:17:54.512 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 4665], 99.95th=[ 4665],
00:17:54.512 | 99.99th=[ 4665]
00:17:54.512 bw ( KiB/s): min= 8192, max=374784, per=4.85%, avg=141450.07, stdev=89403.05, samples=14
00:17:54.512 iops : min= 8, max= 366, avg=138.07, stdev=87.37, samples=14
00:17:54.512 lat (msec) : 100=0.09%, 500=13.52%, 750=20.91%, 1000=21.92%, 2000=29.04%
00:17:54.512 lat (msec) : >=2000=14.52%
00:17:54.512 cpu : usr=0.03%, sys=0.99%, ctx=1940, majf=0, minf=32769
00:17:54.512 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=2.9%, >=64=94.2%
00:17:54.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.512 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:54.512 issued rwts: total=1095,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.512 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.512 job3: (groupid=0, jobs=1): err= 0: pid=329829: Fri Apr 19 04:09:08 2024
00:17:54.512 read: IOPS=4, BW=4854KiB/s (4971kB/s)(67.0MiB/14133msec)
00:17:54.512 slat (usec): min=447, max=2122.9k, avg=179476.22, stdev=560848.71
00:17:54.512 clat (msec): min=2107, max=14130, avg=11341.64, stdev=3730.12
00:17:54.512 lat (msec): min=4185, max=14132, avg=11521.12, stdev=3564.69
00:17:54.512 clat percentiles (msec):
00:17:54.512 | 1.00th=[ 2106], 5.00th=[ 6275], 10.00th=[ 6409], 20.00th=[ 6409],
00:17:54.512 | 30.00th=[ 8557], 40.00th=[13892], 50.00th=[14026], 60.00th=[14026],
00:17:54.512 | 70.00th=[14026], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160],
00:17:54.512 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160],
00:17:54.512 | 99.99th=[14160]
00:17:54.512 lat (msec) : >=2000=100.00%
00:17:54.512 cpu : usr=0.01%, sys=0.38%, ctx=104, majf=0, minf=17153
00:17:54.512 IO depths : 1=1.5%, 2=3.0%, 4=6.0%, 8=11.9%, 16=23.9%, 32=47.8%, >=64=6.0%
00:17:54.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.512 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:17:54.512 issued rwts: total=67,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.512 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.512 job3: (groupid=0, jobs=1): err= 0: pid=329830: Fri Apr 19 04:09:08 2024
00:17:54.512 read: IOPS=1, BW=1097KiB/s (1123kB/s)(15.0MiB/14002msec)
00:17:54.512 slat (msec): min=11, max=2113, avg=792.94, stdev=993.73
00:17:54.512 clat (msec): min=2106, max=13984, avg=8895.14, stdev=3878.07
00:17:54.512 lat (msec): min=4185, max=14001, avg=9688.09, stdev=3596.75
00:17:54.512 clat percentiles (msec):
00:17:54.512 | 1.00th=[ 2106], 5.00th=[ 2106], 10.00th=[ 4178], 20.00th=[ 4245],
00:17:54.512 | 30.00th=[ 6409], 40.00th=[ 6409], 50.00th=[ 8557], 60.00th=[10671],
00:17:54.512 | 70.00th=[10671], 80.00th=[12818], 90.00th=[14026], 95.00th=[14026],
00:17:54.512 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026],
00:17:54.512 | 99.99th=[14026]
00:17:54.512 lat (msec) : >=2000=100.00%
00:17:54.512 cpu : usr=0.00%, sys=0.06%, ctx=70, majf=0, minf=3841
00:17:54.512 IO depths : 1=6.7%, 2=13.3%, 4=26.7%, 8=53.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:17:54.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.512 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.512 issued rwts: total=15,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.512 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.512 job3: (groupid=0, jobs=1): err= 0: pid=329831: Fri Apr 19 04:09:08 2024
00:17:54.512 read: IOPS=30, BW=30.7MiB/s (32.2MB/s)(430MiB/14004msec)
00:17:54.512 slat (usec): min=351, max=2062.3k, avg=23256.71, stdev=107663.44
00:17:54.512 clat (msec): min=1663, max=9015, avg=3774.80, stdev=2723.64
00:17:54.512 lat (msec): min=1737, max=9021, avg=3798.06, stdev=2728.20
00:17:54.512 clat percentiles (msec):
00:17:54.512 | 1.00th=[ 1737], 5.00th=[ 1754], 10.00th=[ 1821], 20.00th=[ 1921],
00:17:54.512 | 30.00th=[ 1972], 40.00th=[ 2005], 50.00th=[ 2056], 60.00th=[ 2140],
00:17:54.512 | 70.00th=[ 4245], 80.00th=[ 7617], 90.00th=[ 8490], 95.00th=[ 8792],
00:17:54.512 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060],
00:17:54.512 | 99.99th=[ 9060]
00:17:54.512 bw ( KiB/s): min= 8192, max=81920, per=1.77%, avg=51675.75, stdev=21955.58, samples=12
00:17:54.512 iops : min= 8, max= 80, avg=50.33, stdev=21.53, samples=12
00:17:54.512 lat (msec) : 2000=39.30%, >=2000=60.70%
00:17:54.512 cpu : usr=0.00%, sys=0.64%, ctx=1145, majf=0,
minf=32769 00:17:54.512 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.7%, 32=7.4%, >=64=85.3% 00:17:54.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.512 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:17:54.512 issued rwts: total=430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.512 job3: (groupid=0, jobs=1): err= 0: pid=329832: Fri Apr 19 04:09:08 2024 00:17:54.512 read: IOPS=2, BW=2330KiB/s (2386kB/s)(32.0MiB/14062msec) 00:17:54.512 slat (usec): min=835, max=2160.0k, avg=373424.97, stdev=797816.08 00:17:54.512 clat (msec): min=2112, max=14060, avg=12796.25, stdev=3066.43 00:17:54.512 lat (msec): min=4248, max=14061, avg=13169.68, stdev=2372.43 00:17:54.512 clat percentiles (msec): 00:17:54.512 | 1.00th=[ 2106], 5.00th=[ 4245], 10.00th=[ 8490], 20.00th=[13892], 00:17:54.512 | 30.00th=[14026], 40.00th=[14026], 50.00th=[14026], 60.00th=[14026], 00:17:54.512 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026], 00:17:54.512 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:17:54.512 | 99.99th=[14026] 00:17:54.512 lat (msec) : >=2000=100.00% 00:17:54.512 cpu : usr=0.00%, sys=0.18%, ctx=96, majf=0, minf=8193 00:17:54.512 IO depths : 1=3.1%, 2=6.2%, 4=12.5%, 8=25.0%, 16=50.0%, 32=3.1%, >=64=0.0% 00:17:54.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.512 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:54.512 issued rwts: total=32,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.512 job3: (groupid=0, jobs=1): err= 0: pid=329833: Fri Apr 19 04:09:08 2024 00:17:54.512 read: IOPS=53, BW=53.4MiB/s (56.0MB/s)(755MiB/14143msec) 00:17:54.512 slat (usec): min=534, max=2939.4k, avg=15933.23, stdev=130786.86 00:17:54.512 clat (msec): min=628, max=7856, avg=2244.03, 
stdev=2357.22 00:17:54.512 lat (msec): min=636, max=7858, avg=2259.96, stdev=2362.43 00:17:54.512 clat percentiles (msec): 00:17:54.512 | 1.00th=[ 634], 5.00th=[ 651], 10.00th=[ 827], 20.00th=[ 1083], 00:17:54.512 | 30.00th=[ 1133], 40.00th=[ 1200], 50.00th=[ 1234], 60.00th=[ 1301], 00:17:54.512 | 70.00th=[ 1385], 80.00th=[ 1821], 90.00th=[ 7416], 95.00th=[ 7550], 00:17:54.512 | 99.00th=[ 7819], 99.50th=[ 7819], 99.90th=[ 7886], 99.95th=[ 7886], 00:17:54.512 | 99.99th=[ 7886] 00:17:54.512 bw ( KiB/s): min= 2052, max=190464, per=3.39%, avg=98934.46, stdev=56287.67, samples=13 00:17:54.512 iops : min= 2, max= 186, avg=96.62, stdev=54.97, samples=13 00:17:54.512 lat (msec) : 750=8.87%, 1000=3.31%, 2000=70.46%, >=2000=17.35% 00:17:54.512 cpu : usr=0.01%, sys=1.05%, ctx=1612, majf=0, minf=32769 00:17:54.512 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.2%, >=64=91.7% 00:17:54.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.512 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:54.512 issued rwts: total=755,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.512 job3: (groupid=0, jobs=1): err= 0: pid=329834: Fri Apr 19 04:09:08 2024 00:17:54.512 read: IOPS=15, BW=15.3MiB/s (16.0MB/s)(216MiB/14136msec) 00:17:54.512 slat (usec): min=478, max=4252.1k, avg=55619.85, stdev=363618.46 00:17:54.512 clat (msec): min=1162, max=13162, avg=7958.24, stdev=5401.04 00:17:54.512 lat (msec): min=1178, max=13189, avg=8013.86, stdev=5392.73 00:17:54.512 clat percentiles (msec): 00:17:54.512 | 1.00th=[ 1183], 5.00th=[ 1234], 10.00th=[ 1284], 20.00th=[ 1452], 00:17:54.512 | 30.00th=[ 1536], 40.00th=[ 6342], 50.00th=[12147], 60.00th=[12416], 00:17:54.512 | 70.00th=[12550], 80.00th=[12684], 90.00th=[12953], 95.00th=[13087], 00:17:54.512 | 99.00th=[13087], 99.50th=[13087], 99.90th=[13221], 99.95th=[13221], 00:17:54.512 | 99.99th=[13221] 00:17:54.512 bw ( 
KiB/s): min= 2048, max=133120, per=0.89%, avg=26039.43, stdev=47791.90, samples=7 00:17:54.512 iops : min= 2, max= 130, avg=25.43, stdev=46.67, samples=7 00:17:54.512 lat (msec) : 2000=38.43%, >=2000=61.57% 00:17:54.512 cpu : usr=0.00%, sys=0.68%, ctx=445, majf=0, minf=32769 00:17:54.512 IO depths : 1=0.5%, 2=0.9%, 4=1.9%, 8=3.7%, 16=7.4%, 32=14.8%, >=64=70.8% 00:17:54.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.512 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:17:54.512 issued rwts: total=216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.512 job3: (groupid=0, jobs=1): err= 0: pid=329835: Fri Apr 19 04:09:08 2024 00:17:54.512 read: IOPS=30, BW=30.6MiB/s (32.0MB/s)(429MiB/14036msec) 00:17:54.512 slat (usec): min=651, max=2186.9k, avg=27805.61, stdev=147459.40 00:17:54.512 clat (msec): min=1461, max=8943, avg=3778.28, stdev=2524.61 00:17:54.512 lat (msec): min=1471, max=9016, avg=3806.08, stdev=2528.78 00:17:54.512 clat percentiles (msec): 00:17:54.512 | 1.00th=[ 1469], 5.00th=[ 1519], 10.00th=[ 1569], 20.00th=[ 1720], 00:17:54.512 | 30.00th=[ 2072], 40.00th=[ 2232], 50.00th=[ 2668], 60.00th=[ 2802], 00:17:54.512 | 70.00th=[ 3037], 80.00th=[ 6946], 90.00th=[ 8020], 95.00th=[ 8490], 00:17:54.512 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:17:54.512 | 99.99th=[ 8926] 00:17:54.512 bw ( KiB/s): min= 2019, max=86016, per=1.51%, avg=44168.00, stdev=24930.65, samples=14 00:17:54.512 iops : min= 1, max= 84, avg=42.93, stdev=24.56, samples=14 00:17:54.512 lat (msec) : 2000=25.41%, >=2000=74.59% 00:17:54.512 cpu : usr=0.04%, sys=0.68%, ctx=1234, majf=0, minf=32769 00:17:54.512 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.7%, 32=7.5%, >=64=85.3% 00:17:54.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.512 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.3% 00:17:54.512 issued rwts: total=429,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.512 job3: (groupid=0, jobs=1): err= 0: pid=329836: Fri Apr 19 04:09:08 2024 00:17:54.512 read: IOPS=19, BW=19.6MiB/s (20.5MB/s)(274MiB/13993msec) 00:17:54.512 slat (usec): min=45, max=2110.2k, avg=43331.46, stdev=278401.39 00:17:54.512 clat (msec): min=591, max=13270, avg=6338.20, stdev=5562.70 00:17:54.512 lat (msec): min=592, max=13271, avg=6381.53, stdev=5570.38 00:17:54.512 clat percentiles (msec): 00:17:54.512 | 1.00th=[ 592], 5.00th=[ 592], 10.00th=[ 592], 20.00th=[ 600], 00:17:54.513 | 30.00th=[ 634], 40.00th=[ 651], 50.00th=[ 4866], 60.00th=[ 9060], 00:17:54.513 | 70.00th=[12953], 80.00th=[12953], 90.00th=[13087], 95.00th=[13221], 00:17:54.513 | 99.00th=[13221], 99.50th=[13221], 99.90th=[13221], 99.95th=[13221], 00:17:54.513 | 99.99th=[13221] 00:17:54.513 bw ( KiB/s): min= 2052, max=149504, per=1.29%, avg=37632.50, stdev=52212.85, samples=8 00:17:54.513 iops : min= 2, max= 146, avg=36.75, stdev=50.99, samples=8 00:17:54.513 lat (msec) : 750=41.61%, >=2000=58.39% 00:17:54.513 cpu : usr=0.01%, sys=0.69%, ctx=220, majf=0, minf=32769 00:17:54.513 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=2.9%, 16=5.8%, 32=11.7%, >=64=77.0% 00:17:54.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.513 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:17:54.513 issued rwts: total=274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.513 job3: (groupid=0, jobs=1): err= 0: pid=329837: Fri Apr 19 04:09:08 2024 00:17:54.513 read: IOPS=152, BW=153MiB/s (160MB/s)(1541MiB/10102msec) 00:17:54.513 slat (usec): min=29, max=97627, avg=6491.68, stdev=12973.61 00:17:54.513 clat (msec): min=93, max=1645, avg=804.26, stdev=309.40 00:17:54.513 lat (msec): min=109, max=1675, avg=810.75, stdev=311.20 00:17:54.513 
clat percentiles (msec): 00:17:54.513 | 1.00th=[ 226], 5.00th=[ 401], 10.00th=[ 502], 20.00th=[ 575], 00:17:54.513 | 30.00th=[ 642], 40.00th=[ 693], 50.00th=[ 726], 60.00th=[ 768], 00:17:54.513 | 70.00th=[ 852], 80.00th=[ 995], 90.00th=[ 1351], 95.00th=[ 1485], 00:17:54.513 | 99.00th=[ 1586], 99.50th=[ 1603], 99.90th=[ 1636], 99.95th=[ 1653], 00:17:54.513 | 99.99th=[ 1653] 00:17:54.513 bw ( KiB/s): min=53141, max=301056, per=5.23%, avg=152385.26, stdev=64004.45, samples=19 00:17:54.513 iops : min= 51, max= 294, avg=148.63, stdev=62.60, samples=19 00:17:54.513 lat (msec) : 100=0.06%, 250=0.97%, 500=8.96%, 750=46.85%, 1000=23.43% 00:17:54.513 lat (msec) : 2000=19.73% 00:17:54.513 cpu : usr=0.05%, sys=1.54%, ctx=1692, majf=0, minf=32769 00:17:54.513 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:17:54.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.513 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:54.513 issued rwts: total=1541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.513 job3: (groupid=0, jobs=1): err= 0: pid=329838: Fri Apr 19 04:09:08 2024 00:17:54.513 read: IOPS=126, BW=127MiB/s (133MB/s)(1280MiB/10085msec) 00:17:54.513 slat (usec): min=41, max=88276, avg=7812.18, stdev=12864.62 00:17:54.513 clat (msec): min=80, max=2362, avg=957.10, stdev=500.45 00:17:54.513 lat (msec): min=94, max=2398, avg=964.92, stdev=503.53 00:17:54.513 clat percentiles (msec): 00:17:54.513 | 1.00th=[ 251], 5.00th=[ 493], 10.00th=[ 527], 20.00th=[ 584], 00:17:54.513 | 30.00th=[ 625], 40.00th=[ 718], 50.00th=[ 860], 60.00th=[ 902], 00:17:54.513 | 70.00th=[ 953], 80.00th=[ 1217], 90.00th=[ 1888], 95.00th=[ 2123], 00:17:54.513 | 99.00th=[ 2333], 99.50th=[ 2366], 99.90th=[ 2366], 99.95th=[ 2366], 00:17:54.513 | 99.99th=[ 2366] 00:17:54.513 bw ( KiB/s): min=22528, max=237568, per=4.26%, avg=124092.53, stdev=66522.79, 
samples=19 00:17:54.513 iops : min= 22, max= 232, avg=121.00, stdev=65.07, samples=19 00:17:54.513 lat (msec) : 100=0.16%, 250=0.78%, 500=4.92%, 750=36.25%, 1000=32.66% 00:17:54.513 lat (msec) : 2000=16.88%, >=2000=8.36% 00:17:54.513 cpu : usr=0.04%, sys=1.52%, ctx=1549, majf=0, minf=32769 00:17:54.513 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:17:54.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.513 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:54.513 issued rwts: total=1280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.513 job3: (groupid=0, jobs=1): err= 0: pid=329839: Fri Apr 19 04:09:08 2024 00:17:54.513 read: IOPS=4, BW=4436KiB/s (4543kB/s)(61.0MiB/14080msec) 00:17:54.513 slat (usec): min=1204, max=2117.0k, avg=196206.64, stdev=576729.44 00:17:54.513 clat (msec): min=2111, max=14078, avg=12504.20, stdev=2727.18 00:17:54.513 lat (msec): min=4194, max=14079, avg=12700.40, stdev=2374.76 00:17:54.513 clat percentiles (msec): 00:17:54.513 | 1.00th=[ 2106], 5.00th=[ 6409], 10.00th=[ 8490], 20.00th=[12818], 00:17:54.513 | 30.00th=[13355], 40.00th=[13489], 50.00th=[13624], 60.00th=[13624], 00:17:54.513 | 70.00th=[13758], 80.00th=[13892], 90.00th=[14026], 95.00th=[14026], 00:17:54.513 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:17:54.513 | 99.99th=[14026] 00:17:54.513 lat (msec) : >=2000=100.00% 00:17:54.513 cpu : usr=0.00%, sys=0.30%, ctx=180, majf=0, minf=15617 00:17:54.513 IO depths : 1=1.6%, 2=3.3%, 4=6.6%, 8=13.1%, 16=26.2%, 32=49.2%, >=64=0.0% 00:17:54.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.513 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:54.513 issued rwts: total=61,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.513 latency : target=0, window=0, percentile=100.00%, depth=128 
00:17:54.513 job3: (groupid=0, jobs=1): err= 0: pid=329840: Fri Apr 19 04:09:08 2024 00:17:54.513 read: IOPS=76, BW=76.5MiB/s (80.2MB/s)(772MiB/10096msec) 00:17:54.513 slat (usec): min=31, max=202867, avg=12989.84, stdev=22054.48 00:17:54.513 clat (msec): min=64, max=3865, avg=1488.03, stdev=1052.38 00:17:54.513 lat (msec): min=107, max=3875, avg=1501.02, stdev=1058.22 00:17:54.513 clat percentiles (msec): 00:17:54.513 | 1.00th=[ 153], 5.00th=[ 368], 10.00th=[ 592], 20.00th=[ 768], 00:17:54.513 | 30.00th=[ 776], 40.00th=[ 810], 50.00th=[ 877], 60.00th=[ 1418], 00:17:54.513 | 70.00th=[ 1838], 80.00th=[ 2232], 90.00th=[ 3540], 95.00th=[ 3742], 00:17:54.513 | 99.00th=[ 3809], 99.50th=[ 3842], 99.90th=[ 3876], 99.95th=[ 3876], 00:17:54.513 | 99.99th=[ 3876] 00:17:54.513 bw ( KiB/s): min=10240, max=169984, per=2.83%, avg=82414.06, stdev=57418.75, samples=16 00:17:54.513 iops : min= 10, max= 166, avg=80.44, stdev=56.02, samples=16 00:17:54.513 lat (msec) : 100=0.13%, 250=1.94%, 500=5.57%, 750=7.64%, 1000=37.82% 00:17:54.513 lat (msec) : 2000=21.89%, >=2000=25.00% 00:17:54.513 cpu : usr=0.02%, sys=1.25%, ctx=1587, majf=0, minf=32769 00:17:54.513 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.1%, >=64=91.8% 00:17:54.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.513 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:54.513 issued rwts: total=772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.513 job3: (groupid=0, jobs=1): err= 0: pid=329841: Fri Apr 19 04:09:08 2024 00:17:54.513 read: IOPS=104, BW=104MiB/s (109MB/s)(1053MiB/10100msec) 00:17:54.513 slat (usec): min=38, max=1850.5k, avg=9497.07, stdev=57612.14 00:17:54.513 clat (msec): min=94, max=3524, avg=1171.19, stdev=739.15 00:17:54.513 lat (msec): min=122, max=3544, avg=1180.69, stdev=740.58 00:17:54.513 clat percentiles (msec): 00:17:54.513 | 1.00th=[ 284], 5.00th=[ 651], 
10.00th=[ 667], 20.00th=[ 709], 00:17:54.513 | 30.00th=[ 776], 40.00th=[ 860], 50.00th=[ 936], 60.00th=[ 995], 00:17:54.513 | 70.00th=[ 1083], 80.00th=[ 1318], 90.00th=[ 2534], 95.00th=[ 3239], 00:17:54.513 | 99.00th=[ 3440], 99.50th=[ 3473], 99.90th=[ 3473], 99.95th=[ 3540], 00:17:54.513 | 99.99th=[ 3540] 00:17:54.513 bw ( KiB/s): min=30476, max=188416, per=4.06%, avg=118506.06, stdev=50313.91, samples=16 00:17:54.513 iops : min= 29, max= 184, avg=115.62, stdev=49.30, samples=16 00:17:54.513 lat (msec) : 100=0.09%, 250=0.76%, 500=0.57%, 750=26.59%, 1000=32.57% 00:17:54.513 lat (msec) : 2000=27.35%, >=2000=12.06% 00:17:54.513 cpu : usr=0.08%, sys=1.41%, ctx=1718, majf=0, minf=32769 00:17:54.513 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.0% 00:17:54.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.513 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:54.513 issued rwts: total=1053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.513 job4: (groupid=0, jobs=1): err= 0: pid=329846: Fri Apr 19 04:09:08 2024 00:17:54.513 read: IOPS=2, BW=2656KiB/s (2720kB/s)(31.0MiB/11950msec) 00:17:54.513 slat (usec): min=449, max=2130.2k, avg=382886.67, stdev=791479.03 00:17:54.513 clat (msec): min=80, max=11948, avg=9664.98, stdev=3663.54 00:17:54.513 lat (msec): min=2154, max=11949, avg=10047.87, stdev=3222.07 00:17:54.513 clat percentiles (msec): 00:17:54.513 | 1.00th=[ 81], 5.00th=[ 2165], 10.00th=[ 4279], 20.00th=[ 6477], 00:17:54.513 | 30.00th=[10671], 40.00th=[11879], 50.00th=[11879], 60.00th=[11879], 00:17:54.513 | 70.00th=[11879], 80.00th=[11879], 90.00th=[11879], 95.00th=[12013], 00:17:54.513 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:17:54.513 | 99.99th=[12013] 00:17:54.513 lat (msec) : 100=3.23%, >=2000=96.77% 00:17:54.513 cpu : usr=0.00%, sys=0.18%, ctx=80, majf=0, minf=7937 
00:17:54.513 IO depths : 1=3.2%, 2=6.5%, 4=12.9%, 8=25.8%, 16=51.6%, 32=0.0%, >=64=0.0% 00:17:54.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.513 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:17:54.513 issued rwts: total=31,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.513 job4: (groupid=0, jobs=1): err= 0: pid=329847: Fri Apr 19 04:09:08 2024 00:17:54.513 read: IOPS=39, BW=39.5MiB/s (41.4MB/s)(396MiB/10029msec) 00:17:54.513 slat (usec): min=47, max=2153.5k, avg=25253.25, stdev=185806.16 00:17:54.513 clat (msec): min=26, max=5071, avg=2326.19, stdev=1947.28 00:17:54.513 lat (msec): min=39, max=5074, avg=2351.44, stdev=1948.20 00:17:54.513 clat percentiles (msec): 00:17:54.513 | 1.00th=[ 43], 5.00th=[ 100], 10.00th=[ 292], 20.00th=[ 735], 00:17:54.513 | 30.00th=[ 735], 40.00th=[ 776], 50.00th=[ 961], 60.00th=[ 3104], 00:17:54.513 | 70.00th=[ 4597], 80.00th=[ 4799], 90.00th=[ 5000], 95.00th=[ 5067], 00:17:54.513 | 99.00th=[ 5067], 99.50th=[ 5067], 99.90th=[ 5067], 99.95th=[ 5067], 00:17:54.513 | 99.99th=[ 5067] 00:17:54.513 bw ( KiB/s): min=12288, max=186368, per=3.78%, avg=110182.40, stdev=67357.10, samples=5 00:17:54.513 iops : min= 12, max= 182, avg=107.60, stdev=65.78, samples=5 00:17:54.513 lat (msec) : 50=1.52%, 100=3.54%, 250=4.29%, 500=1.01%, 750=22.98% 00:17:54.513 lat (msec) : 1000=17.42%, 2000=5.56%, >=2000=43.69% 00:17:54.513 cpu : usr=0.00%, sys=1.09%, ctx=473, majf=0, minf=32769 00:17:54.513 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.1%, >=64=84.1% 00:17:54.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.513 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:17:54.513 issued rwts: total=396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.513 job4: (groupid=0, jobs=1): 
err= 0: pid=329848: Fri Apr 19 04:09:08 2024 00:17:54.513 read: IOPS=11, BW=11.9MiB/s (12.5MB/s)(120MiB/10058msec) 00:17:54.513 slat (usec): min=495, max=2109.1k, avg=83465.99, stdev=351218.37 00:17:54.513 clat (msec): min=41, max=10056, avg=4270.97, stdev=3774.65 00:17:54.513 lat (msec): min=70, max=10057, avg=4354.43, stdev=3791.05 00:17:54.513 clat percentiles (msec): 00:17:54.513 | 1.00th=[ 71], 5.00th=[ 83], 10.00th=[ 1250], 20.00th=[ 1452], 00:17:54.513 | 30.00th=[ 1620], 40.00th=[ 1754], 50.00th=[ 2022], 60.00th=[ 2232], 00:17:54.513 | 70.00th=[ 6611], 80.00th=[ 9866], 90.00th=[10000], 95.00th=[10000], 00:17:54.513 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:17:54.513 | 99.99th=[10000] 00:17:54.513 lat (msec) : 50=0.83%, 100=4.17%, 250=4.17%, 2000=39.17%, >=2000=51.67% 00:17:54.513 cpu : usr=0.00%, sys=0.71%, ctx=297, majf=0, minf=30721 00:17:54.513 IO depths : 1=0.8%, 2=1.7%, 4=3.3%, 8=6.7%, 16=13.3%, 32=26.7%, >=64=47.5% 00:17:54.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.513 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:54.513 issued rwts: total=120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.513 job4: (groupid=0, jobs=1): err= 0: pid=329849: Fri Apr 19 04:09:08 2024 00:17:54.513 read: IOPS=14, BW=14.6MiB/s (15.4MB/s)(177MiB/12088msec) 00:17:54.513 slat (usec): min=82, max=2093.9k, avg=67772.35, stdev=342414.48 00:17:54.513 clat (msec): min=91, max=11657, avg=8290.84, stdev=3412.95 00:17:54.513 lat (msec): min=1342, max=11664, avg=8358.61, stdev=3365.17 00:17:54.513 clat percentiles (msec): 00:17:54.513 | 1.00th=[ 1334], 5.00th=[ 2165], 10.00th=[ 2232], 20.00th=[ 4329], 00:17:54.513 | 30.00th=[ 6477], 40.00th=[ 7617], 50.00th=[ 9597], 60.00th=[10805], 00:17:54.513 | 70.00th=[11208], 80.00th=[11342], 90.00th=[11476], 95.00th=[11610], 00:17:54.513 | 99.00th=[11610], 
99.50th=[11610], 99.90th=[11610], 99.95th=[11610], 00:17:54.513 | 99.99th=[11610] 00:17:54.513 bw ( KiB/s): min= 2048, max=26624, per=0.50%, avg=14629.14, stdev=10268.40, samples=7 00:17:54.513 iops : min= 2, max= 26, avg=14.29, stdev=10.03, samples=7 00:17:54.513 lat (msec) : 100=0.56%, 2000=2.26%, >=2000=97.18% 00:17:54.513 cpu : usr=0.00%, sys=0.73%, ctx=208, majf=0, minf=32769 00:17:54.513 IO depths : 1=0.6%, 2=1.1%, 4=2.3%, 8=4.5%, 16=9.0%, 32=18.1%, >=64=64.4% 00:17:54.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.513 complete : 0=0.0%, 4=98.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.0% 00:17:54.513 issued rwts: total=177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.513 job4: (groupid=0, jobs=1): err= 0: pid=329850: Fri Apr 19 04:09:08 2024 00:17:54.513 read: IOPS=62, BW=62.1MiB/s (65.1MB/s)(625MiB/10060msec) 00:17:54.513 slat (usec): min=52, max=2087.7k, avg=16029.55, stdev=122859.45 00:17:54.513 clat (msec): min=39, max=4235, avg=1431.89, stdev=1121.16 00:17:54.513 lat (msec): min=68, max=4239, avg=1447.92, stdev=1129.62 00:17:54.513 clat percentiles (msec): 00:17:54.513 | 1.00th=[ 89], 5.00th=[ 253], 10.00th=[ 430], 20.00th=[ 726], 00:17:54.513 | 30.00th=[ 760], 40.00th=[ 793], 50.00th=[ 835], 60.00th=[ 1250], 00:17:54.514 | 70.00th=[ 1334], 80.00th=[ 3138], 90.00th=[ 3272], 95.00th=[ 3339], 00:17:54.514 | 99.00th=[ 4212], 99.50th=[ 4212], 99.90th=[ 4245], 99.95th=[ 4245], 00:17:54.514 | 99.99th=[ 4245] 00:17:54.514 bw ( KiB/s): min=51200, max=182272, per=4.36%, avg=127210.50, stdev=45560.27, samples=8 00:17:54.514 iops : min= 50, max= 178, avg=124.12, stdev=44.60, samples=8 00:17:54.514 lat (msec) : 50=0.16%, 100=1.28%, 250=3.52%, 500=7.20%, 750=15.20% 00:17:54.514 lat (msec) : 1000=31.20%, 2000=16.32%, >=2000=25.12% 00:17:54.514 cpu : usr=0.00%, sys=1.06%, ctx=758, majf=0, minf=32769 00:17:54.514 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 
16=2.6%, 32=5.1%, >=64=89.9% 00:17:54.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.514 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:54.514 issued rwts: total=625,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.514 job4: (groupid=0, jobs=1): err= 0: pid=329852: Fri Apr 19 04:09:08 2024 00:17:54.514 read: IOPS=4, BW=4169KiB/s (4269kB/s)(49.0MiB/12036msec) 00:17:54.514 slat (usec): min=722, max=2136.2k, avg=243936.89, stdev=647993.56 00:17:54.514 clat (msec): min=82, max=12033, avg=9715.30, stdev=3878.41 00:17:54.514 lat (msec): min=2121, max=12035, avg=9959.24, stdev=3627.73 00:17:54.514 clat percentiles (msec): 00:17:54.514 | 1.00th=[ 83], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 4329], 00:17:54.514 | 30.00th=[10671], 40.00th=[11879], 50.00th=[12013], 60.00th=[12013], 00:17:54.514 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:17:54.514 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:17:54.514 | 99.99th=[12013] 00:17:54.514 lat (msec) : 100=2.04%, >=2000=97.96% 00:17:54.514 cpu : usr=0.00%, sys=0.39%, ctx=113, majf=0, minf=12545 00:17:54.514 IO depths : 1=2.0%, 2=4.1%, 4=8.2%, 8=16.3%, 16=32.7%, 32=36.7%, >=64=0.0% 00:17:54.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.514 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:17:54.514 issued rwts: total=49,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.514 job4: (groupid=0, jobs=1): err= 0: pid=329853: Fri Apr 19 04:09:08 2024 00:17:54.514 read: IOPS=6, BW=6837KiB/s (7001kB/s)(80.0MiB/11982msec) 00:17:54.514 slat (usec): min=487, max=2129.8k, avg=148758.61, stdev=504131.52 00:17:54.514 clat (msec): min=80, max=11965, avg=6564.47, stdev=2087.90 00:17:54.514 lat (msec): min=2161, max=11981, 
avg=6713.23, stdev=2043.54 00:17:54.514 clat percentiles (msec): 00:17:54.514 | 1.00th=[ 81], 5.00th=[ 2198], 10.00th=[ 4329], 20.00th=[ 6074], 00:17:54.514 | 30.00th=[ 6208], 40.00th=[ 6208], 50.00th=[ 6342], 60.00th=[ 6342], 00:17:54.514 | 70.00th=[ 6477], 80.00th=[ 6477], 90.00th=[ 8658], 95.00th=[11879], 00:17:54.514 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:17:54.514 | 99.99th=[12013] 00:17:54.514 lat (msec) : 100=1.25%, >=2000=98.75% 00:17:54.514 cpu : usr=0.00%, sys=0.44%, ctx=109, majf=0, minf=20481 00:17:54.514 IO depths : 1=1.2%, 2=2.5%, 4=5.0%, 8=10.0%, 16=20.0%, 32=40.0%, >=64=21.3% 00:17:54.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.514 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:54.514 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.514 job4: (groupid=0, jobs=1): err= 0: pid=329854: Fri Apr 19 04:09:08 2024 00:17:54.514 read: IOPS=90, BW=90.6MiB/s (95.0MB/s)(916MiB/10108msec) 00:17:54.514 slat (usec): min=45, max=2039.4k, avg=10925.82, stdev=69212.48 00:17:54.514 clat (msec): min=95, max=3254, avg=1315.67, stdev=796.13 00:17:54.514 lat (msec): min=148, max=3264, avg=1326.60, stdev=798.81 00:17:54.514 clat percentiles (msec): 00:17:54.514 | 1.00th=[ 192], 5.00th=[ 535], 10.00th=[ 827], 20.00th=[ 860], 00:17:54.514 | 30.00th=[ 885], 40.00th=[ 911], 50.00th=[ 927], 60.00th=[ 978], 00:17:54.514 | 70.00th=[ 1301], 80.00th=[ 1754], 90.00th=[ 3071], 95.00th=[ 3104], 00:17:54.514 | 99.00th=[ 3205], 99.50th=[ 3205], 99.90th=[ 3239], 99.95th=[ 3239], 00:17:54.514 | 99.99th=[ 3239] 00:17:54.514 bw ( KiB/s): min=26624, max=157696, per=3.69%, avg=107692.33, stdev=45486.18, samples=15 00:17:54.514 iops : min= 26, max= 154, avg=105.07, stdev=44.39, samples=15 00:17:54.514 lat (msec) : 100=0.11%, 250=1.09%, 500=3.49%, 750=2.73%, 1000=54.48% 00:17:54.514 lat (msec) : 
2000=23.69%, >=2000=14.41% 00:17:54.514 cpu : usr=0.04%, sys=1.36%, ctx=1162, majf=0, minf=32769 00:17:54.514 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.1% 00:17:54.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.514 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:54.514 issued rwts: total=916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.514 job4: (groupid=0, jobs=1): err= 0: pid=329855: Fri Apr 19 04:09:08 2024 00:17:54.514 read: IOPS=8, BW=8476KiB/s (8680kB/s)(83.0MiB/10027msec) 00:17:54.514 slat (usec): min=372, max=2107.7k, avg=120589.25, stdev=463269.84 00:17:54.514 clat (msec): min=16, max=8850, avg=1755.24, stdev=2899.17 00:17:54.514 lat (msec): min=27, max=10025, avg=1875.83, stdev=3031.14 00:17:54.514 clat percentiles (msec): 00:17:54.514 | 1.00th=[ 17], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 44], 00:17:54.514 | 30.00th=[ 72], 40.00th=[ 112], 50.00th=[ 155], 60.00th=[ 201], 00:17:54.514 | 70.00th=[ 2366], 80.00th=[ 4530], 90.00th=[ 6678], 95.00th=[ 8792], 00:17:54.514 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:17:54.514 | 99.99th=[ 8792] 00:17:54.514 lat (msec) : 20=1.20%, 50=19.28%, 100=19.28%, 250=25.30%, 500=4.82% 00:17:54.514 lat (msec) : >=2000=30.12% 00:17:54.514 cpu : usr=0.00%, sys=0.40%, ctx=168, majf=0, minf=21249 00:17:54.514 IO depths : 1=1.2%, 2=2.4%, 4=4.8%, 8=9.6%, 16=19.3%, 32=38.6%, >=64=24.1% 00:17:54.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.514 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:54.514 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.514 job4: (groupid=0, jobs=1): err= 0: pid=329856: Fri Apr 19 04:09:08 2024 00:17:54.514 read: IOPS=23, BW=23.8MiB/s 
(25.0MB/s)(285MiB/11972msec) 00:17:54.514 slat (usec): min=38, max=2112.4k, avg=41723.14, stdev=249135.71 00:17:54.514 clat (msec): min=79, max=8465, avg=3502.76, stdev=2447.44 00:17:54.514 lat (msec): min=1040, max=8468, avg=3544.48, stdev=2451.12 00:17:54.514 clat percentiles (msec): 00:17:54.514 | 1.00th=[ 1036], 5.00th=[ 1070], 10.00th=[ 1083], 20.00th=[ 1099], 00:17:54.514 | 30.00th=[ 1150], 40.00th=[ 2869], 50.00th=[ 3037], 60.00th=[ 3171], 00:17:54.514 | 70.00th=[ 3473], 80.00th=[ 7349], 90.00th=[ 7819], 95.00th=[ 7886], 00:17:54.514 | 99.00th=[ 8423], 99.50th=[ 8423], 99.90th=[ 8490], 99.95th=[ 8490], 00:17:54.514 | 99.99th=[ 8490] 00:17:54.514 bw ( KiB/s): min= 4087, max=129024, per=2.21%, avg=64305.40, stdev=55762.97, samples=5 00:17:54.514 iops : min= 3, max= 126, avg=62.60, stdev=54.72, samples=5 00:17:54.514 lat (msec) : 100=0.35%, 2000=31.58%, >=2000=68.07% 00:17:54.514 cpu : usr=0.01%, sys=0.55%, ctx=401, majf=0, minf=32769 00:17:54.514 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.6%, 32=11.2%, >=64=77.9% 00:17:54.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.514 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:17:54.514 issued rwts: total=285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.514 job4: (groupid=0, jobs=1): err= 0: pid=329857: Fri Apr 19 04:09:08 2024 00:17:54.514 read: IOPS=39, BW=39.3MiB/s (41.2MB/s)(398MiB/10119msec) 00:17:54.514 slat (usec): min=64, max=2089.8k, avg=25252.09, stdev=155205.49 00:17:54.514 clat (msec): min=67, max=4972, avg=2151.02, stdev=1611.40 00:17:54.514 lat (msec): min=145, max=4981, avg=2176.27, stdev=1621.69 00:17:54.514 clat percentiles (msec): 00:17:54.514 | 1.00th=[ 146], 5.00th=[ 262], 10.00th=[ 359], 20.00th=[ 609], 00:17:54.514 | 30.00th=[ 810], 40.00th=[ 1150], 50.00th=[ 1435], 60.00th=[ 1905], 00:17:54.514 | 70.00th=[ 3910], 80.00th=[ 4010], 90.00th=[ 4044], 
95.00th=[ 4111], 00:17:54.514 | 99.00th=[ 4933], 99.50th=[ 5000], 99.90th=[ 5000], 99.95th=[ 5000], 00:17:54.514 | 99.99th=[ 5000] 00:17:54.514 bw ( KiB/s): min= 4096, max=159744, per=2.37%, avg=69100.00, stdev=49544.70, samples=8 00:17:54.514 iops : min= 4, max= 156, avg=67.38, stdev=48.36, samples=8 00:17:54.514 lat (msec) : 100=0.25%, 250=4.52%, 500=11.31%, 750=11.31%, 1000=6.78% 00:17:54.514 lat (msec) : 2000=26.88%, >=2000=38.94% 00:17:54.514 cpu : usr=0.01%, sys=0.98%, ctx=767, majf=0, minf=32769 00:17:54.514 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.0%, >=64=84.2% 00:17:54.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.514 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:17:54.514 issued rwts: total=398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.514 job4: (groupid=0, jobs=1): err= 0: pid=329858: Fri Apr 19 04:09:08 2024 00:17:54.514 read: IOPS=222, BW=222MiB/s (233MB/s)(2688MiB/12083msec) 00:17:54.514 slat (usec): min=46, max=2102.1k, avg=4456.15, stdev=56501.47 00:17:54.514 clat (msec): min=91, max=4583, avg=559.68, stdev=852.21 00:17:54.514 lat (msec): min=235, max=4585, avg=564.13, stdev=855.21 00:17:54.514 clat percentiles (msec): 00:17:54.514 | 1.00th=[ 236], 5.00th=[ 239], 10.00th=[ 239], 20.00th=[ 241], 00:17:54.514 | 30.00th=[ 247], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 372], 00:17:54.514 | 70.00th=[ 451], 80.00th=[ 489], 90.00th=[ 609], 95.00th=[ 2198], 00:17:54.514 | 99.00th=[ 4530], 99.50th=[ 4530], 99.90th=[ 4597], 99.95th=[ 4597], 00:17:54.514 | 99.99th=[ 4597] 00:17:54.514 bw ( KiB/s): min= 2052, max=542720, per=10.58%, avg=308452.94, stdev=157609.47, samples=17 00:17:54.514 iops : min= 2, max= 530, avg=301.18, stdev=153.94, samples=17 00:17:54.514 lat (msec) : 100=0.04%, 250=35.31%, 500=47.02%, 750=12.39%, >=2000=5.25% 00:17:54.514 cpu : usr=0.09%, sys=2.80%, ctx=2433, majf=0, 
minf=32769 00:17:54.514 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:17:54.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:54.514 issued rwts: total=2688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.514 job4: (groupid=0, jobs=1): err= 0: pid=329859: Fri Apr 19 04:09:08 2024 00:17:54.514 read: IOPS=16, BW=16.3MiB/s (17.1MB/s)(196MiB/11999msec) 00:17:54.514 slat (usec): min=994, max=2120.6k, avg=51032.81, stdev=263597.31 00:17:54.514 clat (msec): min=1825, max=9161, avg=4184.97, stdev=2535.09 00:17:54.514 lat (msec): min=1873, max=9169, avg=4236.00, stdev=2561.88 00:17:54.514 clat percentiles (msec): 00:17:54.514 | 1.00th=[ 1854], 5.00th=[ 1905], 10.00th=[ 1972], 20.00th=[ 2299], 00:17:54.514 | 30.00th=[ 2567], 40.00th=[ 2836], 50.00th=[ 3138], 60.00th=[ 3440], 00:17:54.514 | 70.00th=[ 3809], 80.00th=[ 6141], 90.00th=[ 9060], 95.00th=[ 9060], 00:17:54.514 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 00:17:54.514 | 99.99th=[ 9194] 00:17:54.514 bw ( KiB/s): min= 2048, max=73728, per=1.62%, avg=47104.00, stdev=39234.04, samples=3 00:17:54.514 iops : min= 2, max= 72, avg=46.00, stdev=38.31, samples=3 00:17:54.514 lat (msec) : 2000=11.22%, >=2000=88.78% 00:17:54.514 cpu : usr=0.00%, sys=0.63%, ctx=472, majf=0, minf=32769 00:17:54.514 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.1%, 16=8.2%, 32=16.3%, >=64=67.9% 00:17:54.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.514 complete : 0=0.0%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.4% 00:17:54.514 issued rwts: total=196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.514 job5: (groupid=0, jobs=1): err= 0: pid=329863: Fri Apr 19 04:09:08 2024 00:17:54.514 read: IOPS=38, 
BW=38.1MiB/s (40.0MB/s)(461MiB/12087msec) 00:17:54.514 slat (usec): min=609, max=2027.0k, avg=26000.19, stdev=137134.24 00:17:54.514 clat (msec): min=98, max=7418, avg=3149.63, stdev=1477.07 00:17:54.514 lat (msec): min=1400, max=7470, avg=3175.63, stdev=1483.12 00:17:54.514 clat percentiles (msec): 00:17:54.514 | 1.00th=[ 1401], 5.00th=[ 1485], 10.00th=[ 1586], 20.00th=[ 1770], 00:17:54.514 | 30.00th=[ 1888], 40.00th=[ 2165], 50.00th=[ 2735], 60.00th=[ 3406], 00:17:54.514 | 70.00th=[ 4212], 80.00th=[ 5067], 90.00th=[ 5134], 95.00th=[ 5201], 00:17:54.514 | 99.00th=[ 6409], 99.50th=[ 6477], 99.90th=[ 7416], 99.95th=[ 7416], 00:17:54.514 | 99.99th=[ 7416] 00:17:54.514 bw ( KiB/s): min= 2052, max=94208, per=1.68%, avg=48859.71, stdev=28994.41, samples=14 00:17:54.514 iops : min= 2, max= 92, avg=47.71, stdev=28.32, samples=14 00:17:54.514 lat (msec) : 100=0.22%, 2000=33.41%, >=2000=66.38% 00:17:54.514 cpu : usr=0.00%, sys=1.04%, ctx=1166, majf=0, minf=32769 00:17:54.514 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=6.9%, >=64=86.3% 00:17:54.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.514 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:17:54.514 issued rwts: total=461,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.514 job5: (groupid=0, jobs=1): err= 0: pid=329864: Fri Apr 19 04:09:08 2024 00:17:54.514 read: IOPS=29, BW=29.1MiB/s (30.5MB/s)(350MiB/12044msec) 00:17:54.514 slat (usec): min=107, max=2167.3k, avg=34122.51, stdev=220274.64 00:17:54.514 clat (msec): min=99, max=10087, avg=4182.57, stdev=3793.13 00:17:54.514 lat (msec): min=1018, max=10096, avg=4216.69, stdev=3796.15 00:17:54.514 clat percentiles (msec): 00:17:54.514 | 1.00th=[ 1011], 5.00th=[ 1045], 10.00th=[ 1062], 20.00th=[ 1116], 00:17:54.514 | 30.00th=[ 1183], 40.00th=[ 1200], 50.00th=[ 1284], 60.00th=[ 3775], 00:17:54.514 | 70.00th=[ 7953], 80.00th=[ 9463], 
90.00th=[ 9731], 95.00th=[10000], 00:17:54.514 | 99.00th=[10000], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:17:54.514 | 99.99th=[10134] 00:17:54.514 bw ( KiB/s): min= 2052, max=122880, per=1.74%, avg=50745.33, stdev=46889.04, samples=9 00:17:54.515 iops : min= 2, max= 120, avg=49.56, stdev=45.79, samples=9 00:17:54.515 lat (msec) : 100=0.29%, 2000=56.57%, >=2000=43.14% 00:17:54.515 cpu : usr=0.01%, sys=0.87%, ctx=878, majf=0, minf=32769 00:17:54.515 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.6%, 32=9.1%, >=64=82.0% 00:17:54.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.515 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:17:54.515 issued rwts: total=350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.515 job5: (groupid=0, jobs=1): err= 0: pid=329865: Fri Apr 19 04:09:08 2024 00:17:54.515 read: IOPS=84, BW=84.9MiB/s (89.1MB/s)(1026MiB/12080msec) 00:17:54.515 slat (usec): min=69, max=2030.5k, avg=11672.17, stdev=64282.36 00:17:54.515 clat (msec): min=98, max=3884, avg=1407.47, stdev=742.84 00:17:54.515 lat (msec): min=818, max=3885, avg=1419.14, stdev=743.37 00:17:54.515 clat percentiles (msec): 00:17:54.515 | 1.00th=[ 827], 5.00th=[ 844], 10.00th=[ 860], 20.00th=[ 927], 00:17:54.515 | 30.00th=[ 978], 40.00th=[ 1011], 50.00th=[ 1083], 60.00th=[ 1217], 00:17:54.515 | 70.00th=[ 1485], 80.00th=[ 1703], 90.00th=[ 2802], 95.00th=[ 3339], 00:17:54.515 | 99.00th=[ 3809], 99.50th=[ 3809], 99.90th=[ 3842], 99.95th=[ 3876], 00:17:54.515 | 99.99th=[ 3876] 00:17:54.515 bw ( KiB/s): min= 2052, max=157696, per=3.51%, avg=102255.06, stdev=44131.39, samples=18 00:17:54.515 iops : min= 2, max= 154, avg=99.78, stdev=43.02, samples=18 00:17:54.515 lat (msec) : 100=0.10%, 1000=36.84%, 2000=50.68%, >=2000=12.38% 00:17:54.515 cpu : usr=0.02%, sys=1.35%, ctx=1586, majf=0, minf=32769 00:17:54.515 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 
16=1.6%, 32=3.1%, >=64=93.9% 00:17:54.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.515 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:54.515 issued rwts: total=1026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.515 job5: (groupid=0, jobs=1): err= 0: pid=329866: Fri Apr 19 04:09:08 2024 00:17:54.515 read: IOPS=35, BW=35.3MiB/s (37.0MB/s)(355MiB/10058msec) 00:17:54.515 slat (usec): min=359, max=2156.8k, avg=28177.89, stdev=191380.02 00:17:54.515 clat (msec): min=52, max=7746, avg=3261.30, stdev=2946.95 00:17:54.515 lat (msec): min=95, max=7758, avg=3289.48, stdev=2947.65 00:17:54.515 clat percentiles (msec): 00:17:54.515 | 1.00th=[ 122], 5.00th=[ 330], 10.00th=[ 860], 20.00th=[ 919], 00:17:54.515 | 30.00th=[ 986], 40.00th=[ 1070], 50.00th=[ 1435], 60.00th=[ 1586], 00:17:54.515 | 70.00th=[ 6879], 80.00th=[ 7215], 90.00th=[ 7550], 95.00th=[ 7617], 00:17:54.515 | 99.00th=[ 7684], 99.50th=[ 7752], 99.90th=[ 7752], 99.95th=[ 7752], 00:17:54.515 | 99.99th=[ 7752] 00:17:54.515 bw ( KiB/s): min= 6144, max=161792, per=2.29%, avg=66682.29, stdev=54045.07, samples=7 00:17:54.515 iops : min= 6, max= 158, avg=65.00, stdev=52.74, samples=7 00:17:54.515 lat (msec) : 100=0.56%, 250=3.10%, 500=2.25%, 1000=27.32%, 2000=28.17% 00:17:54.515 lat (msec) : >=2000=38.59% 00:17:54.515 cpu : usr=0.01%, sys=0.82%, ctx=847, majf=0, minf=32769 00:17:54.515 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.5%, 32=9.0%, >=64=82.3% 00:17:54.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.515 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:17:54.515 issued rwts: total=355,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.515 job5: (groupid=0, jobs=1): err= 0: pid=329867: Fri Apr 19 04:09:08 2024 00:17:54.515 read: IOPS=29, 
BW=29.1MiB/s (30.5MB/s)(351MiB/12070msec) 00:17:54.515 slat (usec): min=36, max=2155.0k, avg=34100.17, stdev=219454.30 00:17:54.515 clat (msec): min=99, max=6418, avg=4002.83, stdev=1045.93 00:17:54.515 lat (msec): min=2136, max=6431, avg=4036.93, stdev=1029.89 00:17:54.515 clat percentiles (msec): 00:17:54.515 | 1.00th=[ 2232], 5.00th=[ 2400], 10.00th=[ 2467], 20.00th=[ 2702], 00:17:54.515 | 30.00th=[ 3876], 40.00th=[ 3977], 50.00th=[ 4077], 60.00th=[ 4245], 00:17:54.515 | 70.00th=[ 4866], 80.00th=[ 5067], 90.00th=[ 5269], 95.00th=[ 5336], 00:17:54.515 | 99.00th=[ 5336], 99.50th=[ 6342], 99.90th=[ 6409], 99.95th=[ 6409], 00:17:54.515 | 99.99th=[ 6409] 00:17:54.515 bw ( KiB/s): min= 2052, max=149504, per=2.25%, avg=65536.57, stdev=58561.57, samples=7 00:17:54.515 iops : min= 2, max= 146, avg=64.00, stdev=57.19, samples=7 00:17:54.515 lat (msec) : 100=0.28%, >=2000=99.72% 00:17:54.515 cpu : usr=0.00%, sys=0.77%, ctx=703, majf=0, minf=32769 00:17:54.515 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.6%, 32=9.1%, >=64=82.1% 00:17:54.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.515 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:17:54.515 issued rwts: total=351,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.515 job5: (groupid=0, jobs=1): err= 0: pid=329868: Fri Apr 19 04:09:08 2024 00:17:54.515 read: IOPS=86, BW=86.1MiB/s (90.3MB/s)(1029MiB/11952msec) 00:17:54.515 slat (usec): min=43, max=2141.5k, avg=11528.79, stdev=120902.84 00:17:54.515 clat (msec): min=86, max=6998, avg=1360.87, stdev=2095.36 00:17:54.515 lat (msec): min=108, max=7004, avg=1372.40, stdev=2103.24 00:17:54.515 clat percentiles (msec): 00:17:54.515 | 1.00th=[ 108], 5.00th=[ 109], 10.00th=[ 109], 20.00th=[ 126], 00:17:54.515 | 30.00th=[ 186], 40.00th=[ 264], 50.00th=[ 393], 60.00th=[ 468], 00:17:54.515 | 70.00th=[ 493], 80.00th=[ 2433], 90.00th=[ 6275], 95.00th=[ 
6812], 00:17:54.515 | 99.00th=[ 6946], 99.50th=[ 7013], 99.90th=[ 7013], 99.95th=[ 7013], 00:17:54.515 | 99.99th=[ 7013] 00:17:54.515 bw ( KiB/s): min= 4096, max=684032, per=7.03%, avg=204993.44, stdev=249675.93, samples=9 00:17:54.515 iops : min= 4, max= 668, avg=200.11, stdev=243.84, samples=9 00:17:54.515 lat (msec) : 100=0.10%, 250=38.19%, 500=33.04%, 750=0.78%, 2000=2.43% 00:17:54.515 lat (msec) : >=2000=25.46% 00:17:54.515 cpu : usr=0.01%, sys=1.10%, ctx=1548, majf=0, minf=32769 00:17:54.515 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.9% 00:17:54.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.515 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:54.515 issued rwts: total=1029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.515 job5: (groupid=0, jobs=1): err= 0: pid=329869: Fri Apr 19 04:09:08 2024 00:17:54.515 read: IOPS=51, BW=51.0MiB/s (53.5MB/s)(515MiB/10093msec) 00:17:54.515 slat (usec): min=97, max=2073.3k, avg=19437.07, stdev=109927.35 00:17:54.515 clat (msec): min=80, max=6109, avg=2313.16, stdev=2005.97 00:17:54.515 lat (msec): min=100, max=6122, avg=2332.60, stdev=2011.44 00:17:54.515 clat percentiles (msec): 00:17:54.515 | 1.00th=[ 136], 5.00th=[ 426], 10.00th=[ 751], 20.00th=[ 936], 00:17:54.515 | 30.00th=[ 1099], 40.00th=[ 1234], 50.00th=[ 1401], 60.00th=[ 1519], 00:17:54.515 | 70.00th=[ 1720], 80.00th=[ 5336], 90.00th=[ 5940], 95.00th=[ 6007], 00:17:54.515 | 99.00th=[ 6074], 99.50th=[ 6074], 99.90th=[ 6141], 99.95th=[ 6141], 00:17:54.515 | 99.99th=[ 6141] 00:17:54.515 bw ( KiB/s): min= 2048, max=137216, per=2.09%, avg=60967.38, stdev=46455.11, samples=13 00:17:54.515 iops : min= 2, max= 134, avg=59.54, stdev=45.37, samples=13 00:17:54.515 lat (msec) : 100=0.19%, 250=2.52%, 500=3.50%, 750=3.69%, 1000=15.53% 00:17:54.515 lat (msec) : 2000=46.60%, >=2000=27.96% 00:17:54.515 cpu : 
usr=0.00%, sys=1.35%, ctx=1178, majf=0, minf=32769 00:17:54.515 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.2%, >=64=87.8% 00:17:54.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.515 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:17:54.515 issued rwts: total=515,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.515 job5: (groupid=0, jobs=1): err= 0: pid=329870: Fri Apr 19 04:09:08 2024 00:17:54.515 read: IOPS=40, BW=40.2MiB/s (42.1MB/s)(482MiB/11991msec) 00:17:54.515 slat (usec): min=348, max=2106.9k, avg=24688.86, stdev=188173.63 00:17:54.515 clat (msec): min=88, max=8123, avg=2884.90, stdev=2862.62 00:17:54.515 lat (msec): min=445, max=8127, avg=2909.59, stdev=2869.02 00:17:54.515 clat percentiles (msec): 00:17:54.515 | 1.00th=[ 443], 5.00th=[ 447], 10.00th=[ 456], 20.00th=[ 464], 00:17:54.515 | 30.00th=[ 477], 40.00th=[ 1150], 50.00th=[ 2232], 60.00th=[ 2400], 00:17:54.515 | 70.00th=[ 2601], 80.00th=[ 7617], 90.00th=[ 7752], 95.00th=[ 7953], 00:17:54.515 | 99.00th=[ 8087], 99.50th=[ 8087], 99.90th=[ 8154], 99.95th=[ 8154], 00:17:54.515 | 99.99th=[ 8154] 00:17:54.515 bw ( KiB/s): min=30720, max=276480, per=4.97%, avg=144959.80, stdev=91160.69, samples=5 00:17:54.515 iops : min= 30, max= 270, avg=141.40, stdev=89.13, samples=5 00:17:54.515 lat (msec) : 100=0.21%, 500=33.40%, 750=1.04%, 1000=1.66%, 2000=11.41% 00:17:54.515 lat (msec) : >=2000=52.28% 00:17:54.515 cpu : usr=0.00%, sys=0.78%, ctx=1018, majf=0, minf=32769 00:17:54.515 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.3%, 32=6.6%, >=64=86.9% 00:17:54.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.515 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:17:54.515 issued rwts: total=482,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.515 latency : target=0, window=0, percentile=100.00%, depth=128 
00:17:54.515 job5: (groupid=0, jobs=1): err= 0: pid=329871: Fri Apr 19 04:09:08 2024 00:17:54.515 read: IOPS=63, BW=63.2MiB/s (66.3MB/s)(636MiB/10066msec) 00:17:54.515 slat (usec): min=37, max=2098.6k, avg=15739.25, stdev=117046.77 00:17:54.515 clat (msec): min=53, max=3388, avg=1482.60, stdev=866.97 00:17:54.515 lat (msec): min=101, max=3404, avg=1498.34, stdev=869.11 00:17:54.515 clat percentiles (msec): 00:17:54.515 | 1.00th=[ 234], 5.00th=[ 422], 10.00th=[ 567], 20.00th=[ 735], 00:17:54.515 | 30.00th=[ 1020], 40.00th=[ 1150], 50.00th=[ 1284], 60.00th=[ 1368], 00:17:54.515 | 70.00th=[ 1536], 80.00th=[ 2802], 90.00th=[ 3004], 95.00th=[ 3104], 00:17:54.515 | 99.00th=[ 3306], 99.50th=[ 3306], 99.90th=[ 3373], 99.95th=[ 3373], 00:17:54.515 | 99.99th=[ 3373] 00:17:54.515 bw ( KiB/s): min= 4096, max=272384, per=3.24%, avg=94580.36, stdev=80524.79, samples=11 00:17:54.515 iops : min= 4, max= 266, avg=92.36, stdev=78.64, samples=11 00:17:54.515 lat (msec) : 100=0.16%, 250=1.10%, 500=6.76%, 750=12.42%, 1000=8.81% 00:17:54.515 lat (msec) : 2000=49.53%, >=2000=21.23% 00:17:54.515 cpu : usr=0.00%, sys=0.83%, ctx=1067, majf=0, minf=32769 00:17:54.515 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.0%, >=64=90.1% 00:17:54.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.515 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:17:54.515 issued rwts: total=636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.515 job5: (groupid=0, jobs=1): err= 0: pid=329872: Fri Apr 19 04:09:08 2024 00:17:54.515 read: IOPS=110, BW=110MiB/s (115MB/s)(1104MiB/10028msec) 00:17:54.515 slat (usec): min=35, max=2099.1k, avg=9057.49, stdev=99218.56 00:17:54.515 clat (msec): min=25, max=4734, avg=889.29, stdev=1366.11 00:17:54.515 lat (msec): min=28, max=4748, avg=898.34, stdev=1374.26 00:17:54.515 clat percentiles (msec): 00:17:54.515 | 1.00th=[ 47], 5.00th=[ 125], 
10.00th=[ 211], 20.00th=[ 226], 00:17:54.515 | 30.00th=[ 230], 40.00th=[ 234], 50.00th=[ 241], 60.00th=[ 342], 00:17:54.515 | 70.00th=[ 418], 80.00th=[ 768], 90.00th=[ 4279], 95.00th=[ 4396], 00:17:54.515 | 99.00th=[ 4665], 99.50th=[ 4732], 99.90th=[ 4732], 99.95th=[ 4732], 00:17:54.515 | 99.99th=[ 4732] 00:17:54.515 bw ( KiB/s): min= 4096, max=557056, per=6.86%, avg=200089.60, stdev=217709.77, samples=10 00:17:54.515 iops : min= 4, max= 544, avg=195.40, stdev=212.61, samples=10 00:17:54.515 lat (msec) : 50=1.36%, 100=2.36%, 250=49.91%, 500=17.93%, 750=7.61% 00:17:54.515 lat (msec) : 1000=1.81%, 2000=4.98%, >=2000=14.04% 00:17:54.515 cpu : usr=0.03%, sys=1.03%, ctx=1245, majf=0, minf=32769 00:17:54.515 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.3% 00:17:54.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.515 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:54.515 issued rwts: total=1104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.516 job5: (groupid=0, jobs=1): err= 0: pid=329873: Fri Apr 19 04:09:08 2024 00:17:54.516 read: IOPS=23, BW=23.2MiB/s (24.3MB/s)(234MiB/10080msec) 00:17:54.516 slat (usec): min=43, max=2099.6k, avg=42734.90, stdev=222145.96 00:17:54.516 clat (msec): min=78, max=8767, avg=4321.46, stdev=3812.74 00:17:54.516 lat (msec): min=80, max=8774, avg=4364.20, stdev=3819.30 00:17:54.516 clat percentiles (msec): 00:17:54.516 | 1.00th=[ 82], 5.00th=[ 142], 10.00th=[ 288], 20.00th=[ 527], 00:17:54.516 | 30.00th=[ 810], 40.00th=[ 1167], 50.00th=[ 2165], 60.00th=[ 8221], 00:17:54.516 | 70.00th=[ 8658], 80.00th=[ 8658], 90.00th=[ 8792], 95.00th=[ 8792], 00:17:54.516 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:17:54.516 | 99.99th=[ 8792] 00:17:54.516 bw ( KiB/s): min=14336, max=104448, per=1.88%, avg=54784.00, stdev=41850.58, samples=4 00:17:54.516 iops : min= 14, 
max= 102, avg=53.50, stdev=40.87, samples=4 00:17:54.516 lat (msec) : 100=2.99%, 250=5.13%, 500=11.11%, 750=8.97%, 1000=6.84% 00:17:54.516 lat (msec) : 2000=11.97%, >=2000=52.99% 00:17:54.516 cpu : usr=0.00%, sys=0.74%, ctx=658, majf=0, minf=32769 00:17:54.516 IO depths : 1=0.4%, 2=0.9%, 4=1.7%, 8=3.4%, 16=6.8%, 32=13.7%, >=64=73.1% 00:17:54.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.516 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:17:54.516 issued rwts: total=234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.516 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.516 job5: (groupid=0, jobs=1): err= 0: pid=329874: Fri Apr 19 04:09:08 2024 00:17:54.516 read: IOPS=7, BW=7284KiB/s (7458kB/s)(85.0MiB/11950msec) 00:17:54.516 slat (usec): min=397, max=2090.6k, avg=139463.40, stdev=496477.02 00:17:54.516 clat (msec): min=94, max=11948, avg=7375.46, stdev=3722.96 00:17:54.516 lat (msec): min=2116, max=11949, avg=7514.92, stdev=3668.61 00:17:54.516 clat percentiles (msec): 00:17:54.516 | 1.00th=[ 95], 5.00th=[ 2140], 10.00th=[ 2198], 20.00th=[ 2232], 00:17:54.516 | 30.00th=[ 4329], 40.00th=[ 6477], 50.00th=[ 8557], 60.00th=[ 8658], 00:17:54.516 | 70.00th=[10805], 80.00th=[10805], 90.00th=[11879], 95.00th=[11879], 00:17:54.516 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:17:54.516 | 99.99th=[12013] 00:17:54.516 lat (msec) : 100=1.18%, >=2000=98.82% 00:17:54.516 cpu : usr=0.01%, sys=0.38%, ctx=87, majf=0, minf=21761 00:17:54.516 IO depths : 1=1.2%, 2=2.4%, 4=4.7%, 8=9.4%, 16=18.8%, 32=37.6%, >=64=25.9% 00:17:54.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.516 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:17:54.516 issued rwts: total=85,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.516 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.516 job5: (groupid=0, jobs=1): err= 0: 
pid=329875: Fri Apr 19 04:09:08 2024 00:17:54.516 read: IOPS=38, BW=38.0MiB/s (39.9MB/s)(382MiB/10048msec) 00:17:54.516 slat (usec): min=656, max=2050.2k, avg=26182.81, stdev=126940.42 00:17:54.516 clat (msec): min=43, max=6500, avg=2972.74, stdev=2004.74 00:17:54.516 lat (msec): min=48, max=6536, avg=2998.93, stdev=2006.56 00:17:54.516 clat percentiles (msec): 00:17:54.516 | 1.00th=[ 53], 5.00th=[ 785], 10.00th=[ 1334], 20.00th=[ 1418], 00:17:54.516 | 30.00th=[ 1519], 40.00th=[ 1720], 50.00th=[ 1989], 60.00th=[ 2165], 00:17:54.516 | 70.00th=[ 5067], 80.00th=[ 5537], 90.00th=[ 6074], 95.00th=[ 6342], 00:17:54.516 | 99.00th=[ 6477], 99.50th=[ 6477], 99.90th=[ 6477], 99.95th=[ 6477], 00:17:54.516 | 99.99th=[ 6477] 00:17:54.516 bw ( KiB/s): min= 4096, max=116736, per=1.49%, avg=43515.50, stdev=33189.89, samples=12 00:17:54.516 iops : min= 4, max= 114, avg=42.33, stdev=32.58, samples=12 00:17:54.516 lat (msec) : 50=0.52%, 100=1.31%, 250=0.79%, 500=1.05%, 750=0.79% 00:17:54.516 lat (msec) : 1000=1.57%, 2000=45.29%, >=2000=48.69% 00:17:54.516 cpu : usr=0.02%, sys=0.97%, ctx=1154, majf=0, minf=32769 00:17:54.516 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.2%, 32=8.4%, >=64=83.5% 00:17:54.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.516 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:17:54.516 issued rwts: total=382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.516 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.516 00:17:54.516 Run status group 0 (all jobs): 00:17:54.516 READ: bw=2848MiB/s (2986MB/s), 802KiB/s-222MiB/s (821kB/s-233MB/s), io=39.3GiB (42.2GB), run=10027-14143msec 00:17:54.516 00:17:54.516 Disk stats (read/write): 00:17:54.516 nvme0n1: ios=67554/0, merge=0/0, ticks=8668782/0, in_queue=8668782, util=98.90% 00:17:54.516 nvme1n1: ios=49213/0, merge=0/0, ticks=8533142/0, in_queue=8533142, util=99.09% 00:17:54.516 nvme2n1: ios=45211/0, merge=0/0, ticks=6926050/0, 
in_queue=6926050, util=99.02% 00:17:54.516 nvme3n1: ios=55311/0, merge=0/0, ticks=7204729/0, in_queue=7204729, util=99.19% 00:17:54.516 nvme4n1: ios=48325/0, merge=0/0, ticks=9089718/0, in_queue=9089718, util=99.24% 00:17:54.516 nvme5n1: ios=56060/0, merge=0/0, ticks=8378796/0, in_queue=8378796, util=99.33% 00:17:54.516 04:09:08 -- target/srq_overwhelm.sh@38 -- # sync 00:17:54.516 04:09:08 -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:17:54.516 04:09:08 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:54.516 04:09:08 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:17:55.095 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.095 04:09:09 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:17:55.095 04:09:09 -- common/autotest_common.sh@1205 -- # local i=0 00:17:55.095 04:09:09 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:55.095 04:09:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000000 00:17:55.095 04:09:09 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:55.095 04:09:09 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000000 00:17:55.095 04:09:09 -- common/autotest_common.sh@1217 -- # return 0 00:17:55.095 04:09:09 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:55.095 04:09:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:55.095 04:09:09 -- common/autotest_common.sh@10 -- # set +x 00:17:55.352 04:09:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:55.352 04:09:09 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:55.352 04:09:09 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:56.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:56.282 04:09:10 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:17:56.282 04:09:10 -- 
common/autotest_common.sh@1205 -- # local i=0 00:17:56.282 04:09:10 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:56.282 04:09:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000001 00:17:56.283 04:09:10 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:56.283 04:09:10 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000001 00:17:56.283 04:09:10 -- common/autotest_common.sh@1217 -- # return 0 00:17:56.283 04:09:10 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:56.283 04:09:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.283 04:09:10 -- common/autotest_common.sh@10 -- # set +x 00:17:56.283 04:09:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.283 04:09:10 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:56.283 04:09:10 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:17:57.216 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:17:57.216 04:09:11 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:17:57.216 04:09:11 -- common/autotest_common.sh@1205 -- # local i=0 00:17:57.216 04:09:11 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:57.216 04:09:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000002 00:17:57.216 04:09:11 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:57.216 04:09:11 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000002 00:17:57.216 04:09:11 -- common/autotest_common.sh@1217 -- # return 0 00:17:57.216 04:09:11 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:57.216 04:09:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:57.216 04:09:11 -- common/autotest_common.sh@10 -- # set +x 00:17:57.216 04:09:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:57.216 04:09:11 -- 
target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:57.216 04:09:11 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:17:58.148 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:17:58.148 04:09:12 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:17:58.148 04:09:12 -- common/autotest_common.sh@1205 -- # local i=0 00:17:58.148 04:09:12 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:58.148 04:09:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000003 00:17:58.148 04:09:12 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:58.148 04:09:12 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000003 00:17:58.148 04:09:12 -- common/autotest_common.sh@1217 -- # return 0 00:17:58.148 04:09:12 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:17:58.148 04:09:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.148 04:09:12 -- common/autotest_common.sh@10 -- # set +x 00:17:58.148 04:09:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.148 04:09:12 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:58.148 04:09:12 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:17:59.081 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:17:59.081 04:09:13 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:17:59.081 04:09:13 -- common/autotest_common.sh@1205 -- # local i=0 00:17:59.081 04:09:13 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:59.081 04:09:13 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000004 00:17:59.081 04:09:13 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000004 00:17:59.081 04:09:13 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:59.081 04:09:13 -- common/autotest_common.sh@1217 -- # return 0 
00:17:59.081 04:09:13 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:17:59.081 04:09:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:59.081 04:09:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.081 04:09:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:59.081 04:09:13 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:17:59.081 04:09:13 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:18:00.012 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:18:00.012 04:09:14 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:18:00.012 04:09:14 -- common/autotest_common.sh@1205 -- # local i=0 00:18:00.012 04:09:14 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:00.012 04:09:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000005 00:18:00.012 04:09:14 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:00.012 04:09:14 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000005 00:18:00.012 04:09:14 -- common/autotest_common.sh@1217 -- # return 0 00:18:00.012 04:09:14 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:18:00.012 04:09:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:00.013 04:09:14 -- common/autotest_common.sh@10 -- # set +x 00:18:00.013 04:09:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:00.013 04:09:14 -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:00.013 04:09:14 -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:18:00.013 04:09:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:00.013 04:09:14 -- nvmf/common.sh@117 -- # sync 00:18:00.013 04:09:14 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:00.013 04:09:14 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:00.013 04:09:14 -- nvmf/common.sh@120 -- # set +e 00:18:00.013 04:09:14 -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:18:00.013 04:09:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:00.013 rmmod nvme_rdma 00:18:00.013 rmmod nvme_fabrics 00:18:00.013 04:09:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:00.013 04:09:14 -- nvmf/common.sh@124 -- # set -e 00:18:00.013 04:09:14 -- nvmf/common.sh@125 -- # return 0 00:18:00.013 04:09:14 -- nvmf/common.sh@478 -- # '[' -n 328211 ']' 00:18:00.013 04:09:14 -- nvmf/common.sh@479 -- # killprocess 328211 00:18:00.013 04:09:14 -- common/autotest_common.sh@936 -- # '[' -z 328211 ']' 00:18:00.013 04:09:14 -- common/autotest_common.sh@940 -- # kill -0 328211 00:18:00.013 04:09:14 -- common/autotest_common.sh@941 -- # uname 00:18:00.271 04:09:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:00.271 04:09:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 328211 00:18:00.271 04:09:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:00.271 04:09:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:00.271 04:09:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 328211' 00:18:00.271 killing process with pid 328211 00:18:00.271 04:09:14 -- common/autotest_common.sh@955 -- # kill 328211 00:18:00.271 04:09:14 -- common/autotest_common.sh@960 -- # wait 328211 00:18:00.530 04:09:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:00.530 04:09:14 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:18:00.530 00:18:00.530 real 0m33.972s 00:18:00.530 user 2m1.498s 00:18:00.530 sys 0m13.566s 00:18:00.530 04:09:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:00.530 04:09:14 -- common/autotest_common.sh@10 -- # set +x 00:18:00.530 ************************************ 00:18:00.530 END TEST nvmf_srq_overwhelm 00:18:00.530 ************************************ 00:18:00.530 04:09:14 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:18:00.530 04:09:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:00.530 04:09:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:00.530 04:09:14 -- common/autotest_common.sh@10 -- # set +x 00:18:00.788 ************************************ 00:18:00.789 START TEST nvmf_shutdown 00:18:00.789 ************************************ 00:18:00.789 04:09:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:18:00.789 * Looking for test storage... 00:18:00.789 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:00.789 04:09:15 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:00.789 04:09:15 -- nvmf/common.sh@7 -- # uname -s 00:18:00.789 04:09:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.789 04:09:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.789 04:09:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.789 04:09:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.789 04:09:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.789 04:09:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.789 04:09:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.789 04:09:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.789 04:09:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.789 04:09:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.789 04:09:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:18:00.789 04:09:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:18:00.789 04:09:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.789 04:09:15 -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.789 04:09:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:00.789 04:09:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.789 04:09:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:00.789 04:09:15 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.789 04:09:15 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.789 04:09:15 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.789 04:09:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.789 04:09:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.789 04:09:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.789 04:09:15 -- paths/export.sh@5 -- # export PATH 00:18:00.789 04:09:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.789 04:09:15 -- nvmf/common.sh@47 -- # : 0 00:18:00.789 04:09:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:00.789 04:09:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:00.789 04:09:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.789 04:09:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.789 04:09:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.789 04:09:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:00.789 04:09:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:00.789 04:09:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:00.789 04:09:15 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:00.789 04:09:15 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:00.789 04:09:15 -- target/shutdown.sh@147 
-- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:18:00.789 04:09:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:00.789 04:09:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:00.789 04:09:15 -- common/autotest_common.sh@10 -- # set +x 00:18:01.047 ************************************ 00:18:01.047 START TEST nvmf_shutdown_tc1 00:18:01.047 ************************************ 00:18:01.047 04:09:15 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:18:01.047 04:09:15 -- target/shutdown.sh@74 -- # starttarget 00:18:01.047 04:09:15 -- target/shutdown.sh@15 -- # nvmftestinit 00:18:01.047 04:09:15 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:18:01.047 04:09:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:01.047 04:09:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:01.048 04:09:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:01.048 04:09:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:01.048 04:09:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.048 04:09:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:01.048 04:09:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.048 04:09:15 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:01.048 04:09:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:01.048 04:09:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:01.048 04:09:15 -- common/autotest_common.sh@10 -- # set +x 00:18:06.322 04:09:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:06.322 04:09:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:06.322 04:09:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:06.322 04:09:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:06.322 04:09:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:06.322 04:09:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:06.322 04:09:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:18:06.322 04:09:20 -- nvmf/common.sh@295 -- # net_devs=() 00:18:06.322 04:09:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:06.322 04:09:20 -- nvmf/common.sh@296 -- # e810=() 00:18:06.322 04:09:20 -- nvmf/common.sh@296 -- # local -ga e810 00:18:06.322 04:09:20 -- nvmf/common.sh@297 -- # x722=() 00:18:06.322 04:09:20 -- nvmf/common.sh@297 -- # local -ga x722 00:18:06.322 04:09:20 -- nvmf/common.sh@298 -- # mlx=() 00:18:06.322 04:09:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:06.322 04:09:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:06.322 04:09:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:06.322 04:09:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:06.322 04:09:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:06.322 04:09:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:06.322 04:09:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:06.322 04:09:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:06.322 04:09:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:06.322 04:09:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:06.322 04:09:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:06.322 04:09:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:06.322 04:09:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:06.322 04:09:20 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:06.322 04:09:20 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:06.322 04:09:20 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:06.322 04:09:20 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:06.322 04:09:20 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:06.322 04:09:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 
00:18:06.322 04:09:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:06.322 04:09:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:06.322 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:06.322 04:09:20 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:06.322 04:09:20 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:06.322 04:09:20 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:06.322 04:09:20 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:06.322 04:09:20 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:06.322 04:09:20 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:06.322 04:09:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:06.322 04:09:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:06.322 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:06.322 04:09:20 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:06.322 04:09:20 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:06.322 04:09:20 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:06.322 04:09:20 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:06.322 04:09:20 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:06.322 04:09:20 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:06.322 04:09:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:06.322 04:09:20 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:06.322 04:09:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:06.322 04:09:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:06.322 04:09:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:06.322 04:09:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:06.322 04:09:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:06.322 Found net devices under 0000:18:00.0: mlx_0_0 00:18:06.322 04:09:20 -- 
nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:06.322 04:09:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:06.322 04:09:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:06.322 04:09:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:06.322 04:09:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:06.322 04:09:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:06.322 Found net devices under 0000:18:00.1: mlx_0_1 00:18:06.322 04:09:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:06.322 04:09:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:06.322 04:09:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:06.322 04:09:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:06.322 04:09:20 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:18:06.322 04:09:20 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:18:06.322 04:09:20 -- nvmf/common.sh@409 -- # rdma_device_init 00:18:06.322 04:09:20 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:18:06.322 04:09:20 -- nvmf/common.sh@58 -- # uname 00:18:06.322 04:09:20 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:06.322 04:09:20 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:06.322 04:09:20 -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:06.322 04:09:20 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:06.322 04:09:20 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:06.322 04:09:20 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:06.322 04:09:20 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:06.322 04:09:20 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:06.322 04:09:20 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:18:06.322 04:09:20 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:06.323 04:09:20 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:06.323 04:09:20 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:06.323 04:09:20 -- 
nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:06.323 04:09:20 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:06.323 04:09:20 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:06.323 04:09:20 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:06.323 04:09:20 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:06.323 04:09:20 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:06.323 04:09:20 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:06.323 04:09:20 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:06.323 04:09:20 -- nvmf/common.sh@105 -- # continue 2 00:18:06.323 04:09:20 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:06.323 04:09:20 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:06.323 04:09:20 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:06.323 04:09:20 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:06.323 04:09:20 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:06.323 04:09:20 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:06.323 04:09:20 -- nvmf/common.sh@105 -- # continue 2 00:18:06.323 04:09:20 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:06.323 04:09:20 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:06.323 04:09:20 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:06.323 04:09:20 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:06.323 04:09:20 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:06.323 04:09:20 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:06.323 04:09:20 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:06.323 04:09:20 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:06.323 04:09:20 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:06.323 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:06.323 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:18:06.323 
altname enp24s0f0np0 00:18:06.323 altname ens785f0np0 00:18:06.323 inet 192.168.100.8/24 scope global mlx_0_0 00:18:06.323 valid_lft forever preferred_lft forever 00:18:06.323 04:09:20 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:06.582 04:09:20 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:06.582 04:09:20 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:06.582 04:09:20 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:06.582 04:09:20 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:06.582 04:09:20 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:06.582 04:09:20 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:06.582 04:09:20 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:06.582 04:09:20 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:06.582 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:06.582 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:18:06.582 altname enp24s0f1np1 00:18:06.582 altname ens785f1np1 00:18:06.582 inet 192.168.100.9/24 scope global mlx_0_1 00:18:06.582 valid_lft forever preferred_lft forever 00:18:06.582 04:09:20 -- nvmf/common.sh@411 -- # return 0 00:18:06.582 04:09:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:06.582 04:09:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:06.582 04:09:20 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:18:06.582 04:09:20 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:18:06.582 04:09:20 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:06.582 04:09:20 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:06.582 04:09:20 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:06.582 04:09:20 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:06.582 04:09:20 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:06.582 04:09:20 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:06.582 04:09:20 -- nvmf/common.sh@101 -- # for net_dev in 
"${net_devs[@]}" 00:18:06.582 04:09:20 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:06.582 04:09:20 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:06.582 04:09:20 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:06.582 04:09:20 -- nvmf/common.sh@105 -- # continue 2 00:18:06.582 04:09:20 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:06.582 04:09:20 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:06.582 04:09:20 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:06.582 04:09:20 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:06.582 04:09:20 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:06.582 04:09:20 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:06.582 04:09:20 -- nvmf/common.sh@105 -- # continue 2 00:18:06.582 04:09:20 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:06.582 04:09:20 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:06.582 04:09:20 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:06.582 04:09:20 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:06.583 04:09:20 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:06.583 04:09:20 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:06.583 04:09:20 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:06.583 04:09:20 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:06.583 04:09:20 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:06.583 04:09:20 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:06.583 04:09:20 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:06.583 04:09:20 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:06.583 04:09:20 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:18:06.583 192.168.100.9' 00:18:06.583 04:09:20 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:06.583 192.168.100.9' 00:18:06.583 04:09:20 -- nvmf/common.sh@446 -- # head -n 1 00:18:06.583 04:09:20 -- 
nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:06.583 04:09:20 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:18:06.583 192.168.100.9' 00:18:06.583 04:09:20 -- nvmf/common.sh@447 -- # tail -n +2 00:18:06.583 04:09:20 -- nvmf/common.sh@447 -- # head -n 1 00:18:06.583 04:09:20 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:06.583 04:09:20 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:18:06.583 04:09:20 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:06.583 04:09:20 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:18:06.583 04:09:20 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:18:06.583 04:09:20 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:18:06.583 04:09:20 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:18:06.583 04:09:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:06.583 04:09:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:06.583 04:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:06.583 04:09:20 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:06.583 04:09:20 -- nvmf/common.sh@470 -- # nvmfpid=337460 00:18:06.583 04:09:20 -- nvmf/common.sh@471 -- # waitforlisten 337460 00:18:06.583 04:09:20 -- common/autotest_common.sh@817 -- # '[' -z 337460 ']' 00:18:06.583 04:09:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.583 04:09:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:06.583 04:09:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:06.583 04:09:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:06.583 04:09:20 -- common/autotest_common.sh@10 -- # set +x 00:18:06.583 [2024-04-19 04:09:20.967786] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:18:06.583 [2024-04-19 04:09:20.967825] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.583 EAL: No free 2048 kB hugepages reported on node 1 00:18:06.583 [2024-04-19 04:09:21.016432] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:06.583 [2024-04-19 04:09:21.088570] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.583 [2024-04-19 04:09:21.088605] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.583 [2024-04-19 04:09:21.088614] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.583 [2024-04-19 04:09:21.088619] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.583 [2024-04-19 04:09:21.088624] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:06.583 [2024-04-19 04:09:21.088723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.583 [2024-04-19 04:09:21.088797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:06.583 [2024-04-19 04:09:21.088903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.583 [2024-04-19 04:09:21.088905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:07.521 04:09:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:07.521 04:09:21 -- common/autotest_common.sh@850 -- # return 0 00:18:07.521 04:09:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:07.521 04:09:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:07.521 04:09:21 -- common/autotest_common.sh@10 -- # set +x 00:18:07.521 04:09:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:07.521 04:09:21 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:07.521 04:09:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:07.521 04:09:21 -- common/autotest_common.sh@10 -- # set +x 00:18:07.521 [2024-04-19 04:09:21.837238] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x100f9b0/0x1013ea0) succeed. 00:18:07.521 [2024-04-19 04:09:21.846942] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1010fa0/0x1055530) succeed. 
00:18:07.521 04:09:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:07.521 04:09:21 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:18:07.521 04:09:21 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:18:07.521 04:09:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:07.521 04:09:21 -- common/autotest_common.sh@10 -- # set +x 00:18:07.521 04:09:21 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:07.521 04:09:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:07.521 04:09:21 -- target/shutdown.sh@28 -- # cat 00:18:07.521 04:09:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:07.521 04:09:21 -- target/shutdown.sh@28 -- # cat 00:18:07.521 04:09:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:07.521 04:09:21 -- target/shutdown.sh@28 -- # cat 00:18:07.521 04:09:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:07.521 04:09:21 -- target/shutdown.sh@28 -- # cat 00:18:07.521 04:09:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:07.521 04:09:21 -- target/shutdown.sh@28 -- # cat 00:18:07.521 04:09:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:07.521 04:09:21 -- target/shutdown.sh@28 -- # cat 00:18:07.521 04:09:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:07.521 04:09:21 -- target/shutdown.sh@28 -- # cat 00:18:07.521 04:09:22 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:07.521 04:09:22 -- target/shutdown.sh@28 -- # cat 00:18:07.521 04:09:22 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:07.521 04:09:22 -- target/shutdown.sh@28 -- # cat 00:18:07.521 04:09:22 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:07.521 04:09:22 -- target/shutdown.sh@28 -- # cat 00:18:07.521 04:09:22 -- target/shutdown.sh@35 -- # rpc_cmd 00:18:07.521 04:09:22 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:18:07.521 04:09:22 -- common/autotest_common.sh@10 -- # set +x 00:18:07.521 Malloc1 00:18:07.780 [2024-04-19 04:09:22.055590] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:07.780 Malloc2 00:18:07.780 Malloc3 00:18:07.780 Malloc4 00:18:07.780 Malloc5 00:18:07.780 Malloc6 00:18:07.780 Malloc7 00:18:08.040 Malloc8 00:18:08.040 Malloc9 00:18:08.040 Malloc10 00:18:08.040 04:09:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:08.040 04:09:22 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:18:08.040 04:09:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:08.040 04:09:22 -- common/autotest_common.sh@10 -- # set +x 00:18:08.040 04:09:22 -- target/shutdown.sh@78 -- # perfpid=337771 00:18:08.040 04:09:22 -- target/shutdown.sh@79 -- # waitforlisten 337771 /var/tmp/bdevperf.sock 00:18:08.040 04:09:22 -- common/autotest_common.sh@817 -- # '[' -z 337771 ']' 00:18:08.040 04:09:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:08.040 04:09:22 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:18:08.040 04:09:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:08.040 04:09:22 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:08.040 04:09:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:08.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:08.040 04:09:22 -- nvmf/common.sh@521 -- # config=() 00:18:08.040 04:09:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:08.040 04:09:22 -- nvmf/common.sh@521 -- # local subsystem config 00:18:08.040 04:09:22 -- common/autotest_common.sh@10 -- # set +x 00:18:08.040 04:09:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:08.040 04:09:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:08.040 { 00:18:08.040 "params": { 00:18:08.040 "name": "Nvme$subsystem", 00:18:08.040 "trtype": "$TEST_TRANSPORT", 00:18:08.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:08.040 "adrfam": "ipv4", 00:18:08.040 "trsvcid": "$NVMF_PORT", 00:18:08.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:08.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:08.040 "hdgst": ${hdgst:-false}, 00:18:08.040 "ddgst": ${ddgst:-false} 00:18:08.040 }, 00:18:08.040 "method": "bdev_nvme_attach_controller" 00:18:08.040 } 00:18:08.040 EOF 00:18:08.040 )") 00:18:08.040 04:09:22 -- nvmf/common.sh@543 -- # cat 00:18:08.040 04:09:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:08.040 04:09:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:08.040 { 00:18:08.040 "params": { 00:18:08.040 "name": "Nvme$subsystem", 00:18:08.040 "trtype": "$TEST_TRANSPORT", 00:18:08.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:08.040 "adrfam": "ipv4", 00:18:08.040 "trsvcid": "$NVMF_PORT", 00:18:08.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:08.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:08.040 "hdgst": ${hdgst:-false}, 00:18:08.040 "ddgst": ${ddgst:-false} 00:18:08.040 }, 00:18:08.040 "method": "bdev_nvme_attach_controller" 00:18:08.040 } 00:18:08.040 EOF 00:18:08.040 )") 00:18:08.040 04:09:22 -- nvmf/common.sh@543 -- # cat 00:18:08.040 04:09:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:08.040 04:09:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:08.040 { 00:18:08.040 "params": { 00:18:08.040 "name": 
"Nvme$subsystem", 00:18:08.040 "trtype": "$TEST_TRANSPORT", 00:18:08.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:08.040 "adrfam": "ipv4", 00:18:08.040 "trsvcid": "$NVMF_PORT", 00:18:08.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:08.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:08.040 "hdgst": ${hdgst:-false}, 00:18:08.040 "ddgst": ${ddgst:-false} 00:18:08.040 }, 00:18:08.040 "method": "bdev_nvme_attach_controller" 00:18:08.040 } 00:18:08.040 EOF 00:18:08.040 )") 00:18:08.040 04:09:22 -- nvmf/common.sh@543 -- # cat 00:18:08.040 04:09:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:08.040 04:09:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:08.040 { 00:18:08.040 "params": { 00:18:08.040 "name": "Nvme$subsystem", 00:18:08.040 "trtype": "$TEST_TRANSPORT", 00:18:08.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:08.040 "adrfam": "ipv4", 00:18:08.040 "trsvcid": "$NVMF_PORT", 00:18:08.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:08.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:08.040 "hdgst": ${hdgst:-false}, 00:18:08.040 "ddgst": ${ddgst:-false} 00:18:08.040 }, 00:18:08.040 "method": "bdev_nvme_attach_controller" 00:18:08.040 } 00:18:08.040 EOF 00:18:08.040 )") 00:18:08.040 04:09:22 -- nvmf/common.sh@543 -- # cat 00:18:08.040 04:09:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:08.040 04:09:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:08.040 { 00:18:08.040 "params": { 00:18:08.040 "name": "Nvme$subsystem", 00:18:08.040 "trtype": "$TEST_TRANSPORT", 00:18:08.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:08.040 "adrfam": "ipv4", 00:18:08.040 "trsvcid": "$NVMF_PORT", 00:18:08.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:08.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:08.040 "hdgst": ${hdgst:-false}, 00:18:08.040 "ddgst": ${ddgst:-false} 00:18:08.040 }, 00:18:08.040 "method": "bdev_nvme_attach_controller" 00:18:08.040 } 00:18:08.040 EOF 
00:18:08.040 )") 00:18:08.040 04:09:22 -- nvmf/common.sh@543 -- # cat 00:18:08.040 04:09:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:08.040 04:09:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:08.040 { 00:18:08.040 "params": { 00:18:08.040 "name": "Nvme$subsystem", 00:18:08.040 "trtype": "$TEST_TRANSPORT", 00:18:08.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:08.040 "adrfam": "ipv4", 00:18:08.040 "trsvcid": "$NVMF_PORT", 00:18:08.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:08.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:08.040 "hdgst": ${hdgst:-false}, 00:18:08.040 "ddgst": ${ddgst:-false} 00:18:08.040 }, 00:18:08.040 "method": "bdev_nvme_attach_controller" 00:18:08.040 } 00:18:08.040 EOF 00:18:08.040 )") 00:18:08.040 04:09:22 -- nvmf/common.sh@543 -- # cat 00:18:08.040 [2024-04-19 04:09:22.530722] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:18:08.040 [2024-04-19 04:09:22.530766] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:08.040 04:09:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:08.040 04:09:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:08.040 { 00:18:08.040 "params": { 00:18:08.040 "name": "Nvme$subsystem", 00:18:08.040 "trtype": "$TEST_TRANSPORT", 00:18:08.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:08.040 "adrfam": "ipv4", 00:18:08.040 "trsvcid": "$NVMF_PORT", 00:18:08.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:08.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:08.040 "hdgst": ${hdgst:-false}, 00:18:08.040 "ddgst": ${ddgst:-false} 00:18:08.040 }, 00:18:08.040 "method": "bdev_nvme_attach_controller" 00:18:08.040 } 00:18:08.040 EOF 00:18:08.040 )") 00:18:08.040 04:09:22 -- nvmf/common.sh@543 -- # cat 00:18:08.040 04:09:22 -- 
nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:08.040 04:09:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:08.040 { 00:18:08.040 "params": { 00:18:08.040 "name": "Nvme$subsystem", 00:18:08.040 "trtype": "$TEST_TRANSPORT", 00:18:08.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:08.040 "adrfam": "ipv4", 00:18:08.040 "trsvcid": "$NVMF_PORT", 00:18:08.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:08.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:08.040 "hdgst": ${hdgst:-false}, 00:18:08.040 "ddgst": ${ddgst:-false} 00:18:08.040 }, 00:18:08.040 "method": "bdev_nvme_attach_controller" 00:18:08.040 } 00:18:08.040 EOF 00:18:08.040 )") 00:18:08.041 04:09:22 -- nvmf/common.sh@543 -- # cat 00:18:08.041 04:09:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:08.041 04:09:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:08.041 { 00:18:08.041 "params": { 00:18:08.041 "name": "Nvme$subsystem", 00:18:08.041 "trtype": "$TEST_TRANSPORT", 00:18:08.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:08.041 "adrfam": "ipv4", 00:18:08.041 "trsvcid": "$NVMF_PORT", 00:18:08.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:08.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:08.041 "hdgst": ${hdgst:-false}, 00:18:08.041 "ddgst": ${ddgst:-false} 00:18:08.041 }, 00:18:08.041 "method": "bdev_nvme_attach_controller" 00:18:08.041 } 00:18:08.041 EOF 00:18:08.041 )") 00:18:08.041 04:09:22 -- nvmf/common.sh@543 -- # cat 00:18:08.041 04:09:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:08.041 04:09:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:08.041 { 00:18:08.041 "params": { 00:18:08.041 "name": "Nvme$subsystem", 00:18:08.041 "trtype": "$TEST_TRANSPORT", 00:18:08.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:08.041 "adrfam": "ipv4", 00:18:08.041 "trsvcid": "$NVMF_PORT", 00:18:08.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:08.041 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:18:08.041 "hdgst": ${hdgst:-false}, 00:18:08.041 "ddgst": ${ddgst:-false} 00:18:08.041 }, 00:18:08.041 "method": "bdev_nvme_attach_controller" 00:18:08.041 } 00:18:08.041 EOF 00:18:08.041 )") 00:18:08.041 04:09:22 -- nvmf/common.sh@543 -- # cat 00:18:08.041 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.041 04:09:22 -- nvmf/common.sh@545 -- # jq . 00:18:08.041 04:09:22 -- nvmf/common.sh@546 -- # IFS=, 00:18:08.041 04:09:22 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:08.041 "params": { 00:18:08.041 "name": "Nvme1", 00:18:08.041 "trtype": "rdma", 00:18:08.041 "traddr": "192.168.100.8", 00:18:08.041 "adrfam": "ipv4", 00:18:08.041 "trsvcid": "4420", 00:18:08.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:08.041 "hdgst": false, 00:18:08.041 "ddgst": false 00:18:08.041 }, 00:18:08.041 "method": "bdev_nvme_attach_controller" 00:18:08.041 },{ 00:18:08.041 "params": { 00:18:08.041 "name": "Nvme2", 00:18:08.041 "trtype": "rdma", 00:18:08.041 "traddr": "192.168.100.8", 00:18:08.041 "adrfam": "ipv4", 00:18:08.041 "trsvcid": "4420", 00:18:08.041 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:08.041 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:08.041 "hdgst": false, 00:18:08.041 "ddgst": false 00:18:08.041 }, 00:18:08.041 "method": "bdev_nvme_attach_controller" 00:18:08.041 },{ 00:18:08.041 "params": { 00:18:08.041 "name": "Nvme3", 00:18:08.041 "trtype": "rdma", 00:18:08.041 "traddr": "192.168.100.8", 00:18:08.041 "adrfam": "ipv4", 00:18:08.041 "trsvcid": "4420", 00:18:08.041 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:08.041 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:08.041 "hdgst": false, 00:18:08.041 "ddgst": false 00:18:08.041 }, 00:18:08.041 "method": "bdev_nvme_attach_controller" 00:18:08.041 },{ 00:18:08.041 "params": { 00:18:08.041 "name": "Nvme4", 00:18:08.041 "trtype": "rdma", 00:18:08.041 "traddr": "192.168.100.8", 00:18:08.041 "adrfam": "ipv4", 
00:18:08.041 "trsvcid": "4420", 00:18:08.041 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:08.041 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:08.041 "hdgst": false, 00:18:08.041 "ddgst": false 00:18:08.041 }, 00:18:08.041 "method": "bdev_nvme_attach_controller" 00:18:08.041 },{ 00:18:08.041 "params": { 00:18:08.041 "name": "Nvme5", 00:18:08.041 "trtype": "rdma", 00:18:08.041 "traddr": "192.168.100.8", 00:18:08.041 "adrfam": "ipv4", 00:18:08.041 "trsvcid": "4420", 00:18:08.041 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:08.041 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:08.041 "hdgst": false, 00:18:08.041 "ddgst": false 00:18:08.041 }, 00:18:08.041 "method": "bdev_nvme_attach_controller" 00:18:08.041 },{ 00:18:08.041 "params": { 00:18:08.041 "name": "Nvme6", 00:18:08.041 "trtype": "rdma", 00:18:08.041 "traddr": "192.168.100.8", 00:18:08.041 "adrfam": "ipv4", 00:18:08.041 "trsvcid": "4420", 00:18:08.041 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:08.041 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:08.041 "hdgst": false, 00:18:08.041 "ddgst": false 00:18:08.041 }, 00:18:08.041 "method": "bdev_nvme_attach_controller" 00:18:08.041 },{ 00:18:08.041 "params": { 00:18:08.041 "name": "Nvme7", 00:18:08.041 "trtype": "rdma", 00:18:08.041 "traddr": "192.168.100.8", 00:18:08.041 "adrfam": "ipv4", 00:18:08.041 "trsvcid": "4420", 00:18:08.041 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:08.041 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:08.041 "hdgst": false, 00:18:08.041 "ddgst": false 00:18:08.041 }, 00:18:08.041 "method": "bdev_nvme_attach_controller" 00:18:08.041 },{ 00:18:08.041 "params": { 00:18:08.041 "name": "Nvme8", 00:18:08.041 "trtype": "rdma", 00:18:08.041 "traddr": "192.168.100.8", 00:18:08.041 "adrfam": "ipv4", 00:18:08.041 "trsvcid": "4420", 00:18:08.041 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:08.041 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:08.041 "hdgst": false, 00:18:08.041 "ddgst": false 00:18:08.041 }, 00:18:08.041 "method": 
"bdev_nvme_attach_controller" 00:18:08.041 },{ 00:18:08.041 "params": { 00:18:08.041 "name": "Nvme9", 00:18:08.041 "trtype": "rdma", 00:18:08.041 "traddr": "192.168.100.8", 00:18:08.041 "adrfam": "ipv4", 00:18:08.041 "trsvcid": "4420", 00:18:08.041 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:08.041 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:08.041 "hdgst": false, 00:18:08.041 "ddgst": false 00:18:08.041 }, 00:18:08.041 "method": "bdev_nvme_attach_controller" 00:18:08.041 },{ 00:18:08.041 "params": { 00:18:08.041 "name": "Nvme10", 00:18:08.041 "trtype": "rdma", 00:18:08.041 "traddr": "192.168.100.8", 00:18:08.041 "adrfam": "ipv4", 00:18:08.041 "trsvcid": "4420", 00:18:08.041 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:08.041 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:08.041 "hdgst": false, 00:18:08.041 "ddgst": false 00:18:08.041 }, 00:18:08.041 "method": "bdev_nvme_attach_controller" 00:18:08.041 }' 00:18:08.300 [2024-04-19 04:09:22.583614] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.300 [2024-04-19 04:09:22.650303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.246 04:09:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:09.246 04:09:23 -- common/autotest_common.sh@850 -- # return 0 00:18:09.246 04:09:23 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:09.246 04:09:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.246 04:09:23 -- common/autotest_common.sh@10 -- # set +x 00:18:09.246 04:09:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.246 04:09:23 -- target/shutdown.sh@83 -- # kill -9 337771 00:18:09.246 04:09:23 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:18:09.246 04:09:23 -- target/shutdown.sh@87 -- # sleep 1 00:18:10.182 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 337771 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json 
<(gen_nvmf_target_json "${num_subsystems[@]}") 00:18:10.182 04:09:24 -- target/shutdown.sh@88 -- # kill -0 337460 00:18:10.182 04:09:24 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:18:10.182 04:09:24 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:10.182 04:09:24 -- nvmf/common.sh@521 -- # config=() 00:18:10.182 04:09:24 -- nvmf/common.sh@521 -- # local subsystem config 00:18:10.182 04:09:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:10.182 04:09:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:10.182 { 00:18:10.182 "params": { 00:18:10.182 "name": "Nvme$subsystem", 00:18:10.182 "trtype": "$TEST_TRANSPORT", 00:18:10.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:10.182 "adrfam": "ipv4", 00:18:10.182 "trsvcid": "$NVMF_PORT", 00:18:10.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:10.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:10.182 "hdgst": ${hdgst:-false}, 00:18:10.182 "ddgst": ${ddgst:-false} 00:18:10.182 }, 00:18:10.182 "method": "bdev_nvme_attach_controller" 00:18:10.182 } 00:18:10.182 EOF 00:18:10.182 )") 00:18:10.182 04:09:24 -- nvmf/common.sh@543 -- # cat 00:18:10.182 04:09:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:10.182 04:09:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:10.182 { 00:18:10.182 "params": { 00:18:10.182 "name": "Nvme$subsystem", 00:18:10.182 "trtype": "$TEST_TRANSPORT", 00:18:10.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:10.182 "adrfam": "ipv4", 00:18:10.182 "trsvcid": "$NVMF_PORT", 00:18:10.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:10.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:10.182 "hdgst": ${hdgst:-false}, 00:18:10.182 "ddgst": ${ddgst:-false} 00:18:10.182 }, 00:18:10.182 "method": "bdev_nvme_attach_controller" 00:18:10.182 } 00:18:10.182 EOF 00:18:10.182 )") 00:18:10.182 04:09:24 -- 
nvmf/common.sh@543 -- # cat 00:18:10.182 04:09:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:10.182 04:09:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:10.182 { 00:18:10.182 "params": { 00:18:10.182 "name": "Nvme$subsystem", 00:18:10.182 "trtype": "$TEST_TRANSPORT", 00:18:10.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:10.182 "adrfam": "ipv4", 00:18:10.182 "trsvcid": "$NVMF_PORT", 00:18:10.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:10.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:10.182 "hdgst": ${hdgst:-false}, 00:18:10.182 "ddgst": ${ddgst:-false} 00:18:10.182 }, 00:18:10.182 "method": "bdev_nvme_attach_controller" 00:18:10.182 } 00:18:10.182 EOF 00:18:10.182 )") 00:18:10.182 04:09:24 -- nvmf/common.sh@543 -- # cat 00:18:10.182 04:09:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:10.182 04:09:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:10.182 { 00:18:10.182 "params": { 00:18:10.182 "name": "Nvme$subsystem", 00:18:10.182 "trtype": "$TEST_TRANSPORT", 00:18:10.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:10.182 "adrfam": "ipv4", 00:18:10.182 "trsvcid": "$NVMF_PORT", 00:18:10.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:10.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:10.182 "hdgst": ${hdgst:-false}, 00:18:10.183 "ddgst": ${ddgst:-false} 00:18:10.183 }, 00:18:10.183 "method": "bdev_nvme_attach_controller" 00:18:10.183 } 00:18:10.183 EOF 00:18:10.183 )") 00:18:10.183 04:09:24 -- nvmf/common.sh@543 -- # cat 00:18:10.183 04:09:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:10.183 04:09:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:10.183 { 00:18:10.183 "params": { 00:18:10.183 "name": "Nvme$subsystem", 00:18:10.183 "trtype": "$TEST_TRANSPORT", 00:18:10.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:10.183 "adrfam": "ipv4", 00:18:10.183 "trsvcid": "$NVMF_PORT", 00:18:10.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:18:10.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:10.183 "hdgst": ${hdgst:-false}, 00:18:10.183 "ddgst": ${ddgst:-false} 00:18:10.183 }, 00:18:10.183 "method": "bdev_nvme_attach_controller" 00:18:10.183 } 00:18:10.183 EOF 00:18:10.183 )") 00:18:10.183 04:09:24 -- nvmf/common.sh@543 -- # cat 00:18:10.183 04:09:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:10.183 04:09:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:10.183 { 00:18:10.183 "params": { 00:18:10.183 "name": "Nvme$subsystem", 00:18:10.183 "trtype": "$TEST_TRANSPORT", 00:18:10.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:10.183 "adrfam": "ipv4", 00:18:10.183 "trsvcid": "$NVMF_PORT", 00:18:10.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:10.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:10.183 "hdgst": ${hdgst:-false}, 00:18:10.183 "ddgst": ${ddgst:-false} 00:18:10.183 }, 00:18:10.183 "method": "bdev_nvme_attach_controller" 00:18:10.183 } 00:18:10.183 EOF 00:18:10.183 )") 00:18:10.183 04:09:24 -- nvmf/common.sh@543 -- # cat 00:18:10.183 04:09:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:10.183 04:09:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:10.183 { 00:18:10.183 "params": { 00:18:10.183 "name": "Nvme$subsystem", 00:18:10.183 "trtype": "$TEST_TRANSPORT", 00:18:10.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:10.183 "adrfam": "ipv4", 00:18:10.183 "trsvcid": "$NVMF_PORT", 00:18:10.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:10.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:10.183 "hdgst": ${hdgst:-false}, 00:18:10.183 "ddgst": ${ddgst:-false} 00:18:10.183 }, 00:18:10.183 "method": "bdev_nvme_attach_controller" 00:18:10.183 } 00:18:10.183 EOF 00:18:10.183 )") 00:18:10.183 [2024-04-19 04:09:24.552082] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:18:10.183 [2024-04-19 04:09:24.552125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid338072 ] 00:18:10.183 04:09:24 -- nvmf/common.sh@543 -- # cat 00:18:10.183 04:09:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:10.183 04:09:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:10.183 { 00:18:10.183 "params": { 00:18:10.183 "name": "Nvme$subsystem", 00:18:10.183 "trtype": "$TEST_TRANSPORT", 00:18:10.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:10.183 "adrfam": "ipv4", 00:18:10.183 "trsvcid": "$NVMF_PORT", 00:18:10.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:10.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:10.183 "hdgst": ${hdgst:-false}, 00:18:10.183 "ddgst": ${ddgst:-false} 00:18:10.183 }, 00:18:10.183 "method": "bdev_nvme_attach_controller" 00:18:10.183 } 00:18:10.183 EOF 00:18:10.183 )") 00:18:10.183 04:09:24 -- nvmf/common.sh@543 -- # cat 00:18:10.183 04:09:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:10.183 04:09:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:10.183 { 00:18:10.183 "params": { 00:18:10.183 "name": "Nvme$subsystem", 00:18:10.183 "trtype": "$TEST_TRANSPORT", 00:18:10.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:10.183 "adrfam": "ipv4", 00:18:10.183 "trsvcid": "$NVMF_PORT", 00:18:10.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:10.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:10.183 "hdgst": ${hdgst:-false}, 00:18:10.183 "ddgst": ${ddgst:-false} 00:18:10.183 }, 00:18:10.183 "method": "bdev_nvme_attach_controller" 00:18:10.183 } 00:18:10.183 EOF 00:18:10.183 )") 00:18:10.183 04:09:24 -- nvmf/common.sh@543 -- # cat 00:18:10.183 04:09:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:10.183 04:09:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 
00:18:10.183 { 00:18:10.183 "params": { 00:18:10.183 "name": "Nvme$subsystem", 00:18:10.183 "trtype": "$TEST_TRANSPORT", 00:18:10.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:10.183 "adrfam": "ipv4", 00:18:10.183 "trsvcid": "$NVMF_PORT", 00:18:10.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:10.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:10.183 "hdgst": ${hdgst:-false}, 00:18:10.183 "ddgst": ${ddgst:-false} 00:18:10.183 }, 00:18:10.183 "method": "bdev_nvme_attach_controller" 00:18:10.183 } 00:18:10.183 EOF 00:18:10.183 )") 00:18:10.183 04:09:24 -- nvmf/common.sh@543 -- # cat 00:18:10.183 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.183 04:09:24 -- nvmf/common.sh@545 -- # jq . 00:18:10.183 04:09:24 -- nvmf/common.sh@546 -- # IFS=, 00:18:10.183 04:09:24 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:10.183 "params": { 00:18:10.183 "name": "Nvme1", 00:18:10.183 "trtype": "rdma", 00:18:10.183 "traddr": "192.168.100.8", 00:18:10.183 "adrfam": "ipv4", 00:18:10.183 "trsvcid": "4420", 00:18:10.183 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.183 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:10.183 "hdgst": false, 00:18:10.183 "ddgst": false 00:18:10.183 }, 00:18:10.183 "method": "bdev_nvme_attach_controller" 00:18:10.183 },{ 00:18:10.183 "params": { 00:18:10.183 "name": "Nvme2", 00:18:10.183 "trtype": "rdma", 00:18:10.183 "traddr": "192.168.100.8", 00:18:10.183 "adrfam": "ipv4", 00:18:10.183 "trsvcid": "4420", 00:18:10.183 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:10.183 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:10.183 "hdgst": false, 00:18:10.183 "ddgst": false 00:18:10.183 }, 00:18:10.183 "method": "bdev_nvme_attach_controller" 00:18:10.183 },{ 00:18:10.183 "params": { 00:18:10.183 "name": "Nvme3", 00:18:10.183 "trtype": "rdma", 00:18:10.183 "traddr": "192.168.100.8", 00:18:10.183 "adrfam": "ipv4", 00:18:10.183 "trsvcid": "4420", 00:18:10.183 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:10.183 "hostnqn": 
"nqn.2016-06.io.spdk:host3", 00:18:10.183 "hdgst": false, 00:18:10.183 "ddgst": false 00:18:10.183 }, 00:18:10.183 "method": "bdev_nvme_attach_controller" 00:18:10.183 },{ 00:18:10.183 "params": { 00:18:10.183 "name": "Nvme4", 00:18:10.183 "trtype": "rdma", 00:18:10.183 "traddr": "192.168.100.8", 00:18:10.183 "adrfam": "ipv4", 00:18:10.183 "trsvcid": "4420", 00:18:10.183 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:10.183 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:10.183 "hdgst": false, 00:18:10.183 "ddgst": false 00:18:10.183 }, 00:18:10.183 "method": "bdev_nvme_attach_controller" 00:18:10.183 },{ 00:18:10.183 "params": { 00:18:10.183 "name": "Nvme5", 00:18:10.183 "trtype": "rdma", 00:18:10.183 "traddr": "192.168.100.8", 00:18:10.183 "adrfam": "ipv4", 00:18:10.183 "trsvcid": "4420", 00:18:10.183 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:10.183 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:10.183 "hdgst": false, 00:18:10.183 "ddgst": false 00:18:10.183 }, 00:18:10.183 "method": "bdev_nvme_attach_controller" 00:18:10.183 },{ 00:18:10.183 "params": { 00:18:10.183 "name": "Nvme6", 00:18:10.183 "trtype": "rdma", 00:18:10.183 "traddr": "192.168.100.8", 00:18:10.183 "adrfam": "ipv4", 00:18:10.184 "trsvcid": "4420", 00:18:10.184 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:10.184 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:10.184 "hdgst": false, 00:18:10.184 "ddgst": false 00:18:10.184 }, 00:18:10.184 "method": "bdev_nvme_attach_controller" 00:18:10.184 },{ 00:18:10.184 "params": { 00:18:10.184 "name": "Nvme7", 00:18:10.184 "trtype": "rdma", 00:18:10.184 "traddr": "192.168.100.8", 00:18:10.184 "adrfam": "ipv4", 00:18:10.184 "trsvcid": "4420", 00:18:10.184 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:10.184 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:10.184 "hdgst": false, 00:18:10.184 "ddgst": false 00:18:10.184 }, 00:18:10.184 "method": "bdev_nvme_attach_controller" 00:18:10.184 },{ 00:18:10.184 "params": { 00:18:10.184 "name": "Nvme8", 00:18:10.184 
"trtype": "rdma", 00:18:10.184 "traddr": "192.168.100.8", 00:18:10.184 "adrfam": "ipv4", 00:18:10.184 "trsvcid": "4420", 00:18:10.184 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:10.184 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:10.184 "hdgst": false, 00:18:10.184 "ddgst": false 00:18:10.184 }, 00:18:10.184 "method": "bdev_nvme_attach_controller" 00:18:10.184 },{ 00:18:10.184 "params": { 00:18:10.184 "name": "Nvme9", 00:18:10.184 "trtype": "rdma", 00:18:10.184 "traddr": "192.168.100.8", 00:18:10.184 "adrfam": "ipv4", 00:18:10.184 "trsvcid": "4420", 00:18:10.184 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:10.184 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:10.184 "hdgst": false, 00:18:10.184 "ddgst": false 00:18:10.184 }, 00:18:10.184 "method": "bdev_nvme_attach_controller" 00:18:10.184 },{ 00:18:10.184 "params": { 00:18:10.184 "name": "Nvme10", 00:18:10.184 "trtype": "rdma", 00:18:10.184 "traddr": "192.168.100.8", 00:18:10.184 "adrfam": "ipv4", 00:18:10.184 "trsvcid": "4420", 00:18:10.184 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:10.184 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:10.184 "hdgst": false, 00:18:10.184 "ddgst": false 00:18:10.184 }, 00:18:10.184 "method": "bdev_nvme_attach_controller" 00:18:10.184 }' 00:18:10.184 [2024-04-19 04:09:24.603931] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.184 [2024-04-19 04:09:24.671312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.120 Running I/O for 1 seconds... 
00:18:12.497 00:18:12.497 Latency(us) 00:18:12.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.497 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:12.497 Verification LBA range: start 0x0 length 0x400 00:18:12.497 Nvme1n1 : 1.15 414.78 25.92 0.00 0.00 151961.23 5679.79 220589.32 00:18:12.497 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:12.497 Verification LBA range: start 0x0 length 0x400 00:18:12.497 Nvme2n1 : 1.15 416.11 26.01 0.00 0.00 148969.04 10437.21 156898.04 00:18:12.497 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:12.497 Verification LBA range: start 0x0 length 0x400 00:18:12.497 Nvme3n1 : 1.15 415.71 25.98 0.00 0.00 147236.13 17379.18 150684.25 00:18:12.497 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:12.497 Verification LBA range: start 0x0 length 0x400 00:18:12.497 Nvme4n1 : 1.16 417.90 26.12 0.00 0.00 144720.03 19320.98 145247.19 00:18:12.497 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:12.497 Verification LBA range: start 0x0 length 0x400 00:18:12.497 Nvme5n1 : 1.16 401.18 25.07 0.00 0.00 147988.74 19320.98 135149.80 00:18:12.497 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:12.497 Verification LBA range: start 0x0 length 0x400 00:18:12.497 Nvme6n1 : 1.16 414.63 25.91 0.00 0.00 141914.77 19418.07 128159.29 00:18:12.497 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:12.497 Verification LBA range: start 0x0 length 0x400 00:18:12.497 Nvme7n1 : 1.16 414.26 25.89 0.00 0.00 140145.30 19612.25 121945.51 00:18:12.497 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:12.497 Verification LBA range: start 0x0 length 0x400 00:18:12.497 Nvme8n1 : 1.16 411.30 25.71 0.00 0.00 139117.98 19806.44 114178.28 00:18:12.497 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:18:12.497 Verification LBA range: start 0x0 length 0x400 00:18:12.497 Nvme9n1 : 1.16 439.53 27.47 0.00 0.00 129707.43 3713.71 104080.88 00:18:12.497 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:12.497 Verification LBA range: start 0x0 length 0x400 00:18:12.497 Nvme10n1 : 1.16 330.48 20.66 0.00 0.00 170149.23 10000.31 302921.96 00:18:12.497 =================================================================================================================== 00:18:12.497 Total : 4075.88 254.74 0.00 0.00 145586.71 3713.71 302921.96 00:18:12.497 04:09:26 -- target/shutdown.sh@94 -- # stoptarget 00:18:12.497 04:09:26 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:18:12.497 04:09:26 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:18:12.498 04:09:26 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:12.498 04:09:26 -- target/shutdown.sh@45 -- # nvmftestfini 00:18:12.498 04:09:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:12.498 04:09:26 -- nvmf/common.sh@117 -- # sync 00:18:12.498 04:09:26 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:12.498 04:09:26 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:12.498 04:09:26 -- nvmf/common.sh@120 -- # set +e 00:18:12.498 04:09:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:12.498 04:09:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:12.498 rmmod nvme_rdma 00:18:12.498 rmmod nvme_fabrics 00:18:12.756 04:09:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:12.757 04:09:27 -- nvmf/common.sh@124 -- # set -e 00:18:12.757 04:09:27 -- nvmf/common.sh@125 -- # return 0 00:18:12.757 04:09:27 -- nvmf/common.sh@478 -- # '[' -n 337460 ']' 00:18:12.757 04:09:27 -- nvmf/common.sh@479 -- # killprocess 337460 00:18:12.757 04:09:27 -- common/autotest_common.sh@936 -- # '[' -z 337460 ']' 
00:18:12.757 04:09:27 -- common/autotest_common.sh@940 -- # kill -0 337460 00:18:12.757 04:09:27 -- common/autotest_common.sh@941 -- # uname 00:18:12.757 04:09:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:12.757 04:09:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 337460 00:18:12.757 04:09:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:12.757 04:09:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:12.757 04:09:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 337460' 00:18:12.757 killing process with pid 337460 00:18:12.757 04:09:27 -- common/autotest_common.sh@955 -- # kill 337460 00:18:12.757 04:09:27 -- common/autotest_common.sh@960 -- # wait 337460 00:18:13.020 04:09:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:13.020 04:09:27 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:18:13.020 00:18:13.020 real 0m12.196s 00:18:13.020 user 0m30.087s 00:18:13.020 sys 0m5.141s 00:18:13.020 04:09:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:13.020 04:09:27 -- common/autotest_common.sh@10 -- # set +x 00:18:13.020 ************************************ 00:18:13.020 END TEST nvmf_shutdown_tc1 00:18:13.021 ************************************ 00:18:13.279 04:09:27 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:18:13.279 04:09:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:13.279 04:09:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:13.279 04:09:27 -- common/autotest_common.sh@10 -- # set +x 00:18:13.279 ************************************ 00:18:13.279 START TEST nvmf_shutdown_tc2 00:18:13.279 ************************************ 00:18:13.279 04:09:27 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:18:13.279 04:09:27 -- target/shutdown.sh@99 -- # starttarget 00:18:13.279 04:09:27 -- target/shutdown.sh@15 -- # nvmftestinit 00:18:13.279 04:09:27 -- nvmf/common.sh@430 -- 
# '[' -z rdma ']' 00:18:13.279 04:09:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.279 04:09:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:13.279 04:09:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:13.279 04:09:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:13.279 04:09:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.279 04:09:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.279 04:09:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.279 04:09:27 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:13.279 04:09:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:13.279 04:09:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:13.279 04:09:27 -- common/autotest_common.sh@10 -- # set +x 00:18:13.279 04:09:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:13.279 04:09:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:13.279 04:09:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:13.279 04:09:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:13.279 04:09:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:13.279 04:09:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:13.279 04:09:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:13.279 04:09:27 -- nvmf/common.sh@295 -- # net_devs=() 00:18:13.279 04:09:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:13.279 04:09:27 -- nvmf/common.sh@296 -- # e810=() 00:18:13.279 04:09:27 -- nvmf/common.sh@296 -- # local -ga e810 00:18:13.279 04:09:27 -- nvmf/common.sh@297 -- # x722=() 00:18:13.279 04:09:27 -- nvmf/common.sh@297 -- # local -ga x722 00:18:13.279 04:09:27 -- nvmf/common.sh@298 -- # mlx=() 00:18:13.279 04:09:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:13.279 04:09:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:13.279 04:09:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:18:13.279 04:09:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:13.279 04:09:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:13.279 04:09:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:13.279 04:09:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:13.279 04:09:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:13.279 04:09:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:13.279 04:09:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:13.279 04:09:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:13.279 04:09:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:13.279 04:09:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:13.279 04:09:27 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:13.279 04:09:27 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:13.279 04:09:27 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:13.279 04:09:27 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:13.279 04:09:27 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:13.279 04:09:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:13.279 04:09:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:13.279 04:09:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:13.279 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:13.279 04:09:27 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:13.279 04:09:27 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:13.279 04:09:27 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:13.279 04:09:27 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:13.279 04:09:27 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:13.279 04:09:27 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect 
-i 15' 00:18:13.279 04:09:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:13.279 04:09:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:13.279 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:13.279 04:09:27 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:13.279 04:09:27 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:13.279 04:09:27 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:13.279 04:09:27 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:13.279 04:09:27 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:13.279 04:09:27 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:13.279 04:09:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:13.279 04:09:27 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:13.279 04:09:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:13.279 04:09:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.279 04:09:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:13.279 04:09:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.279 04:09:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:13.279 Found net devices under 0000:18:00.0: mlx_0_0 00:18:13.279 04:09:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:13.279 04:09:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:13.279 04:09:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.279 04:09:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:13.279 04:09:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.279 04:09:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:13.280 Found net devices under 0000:18:00.1: mlx_0_1 00:18:13.280 04:09:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:13.280 04:09:27 -- nvmf/common.sh@393 -- 
# (( 2 == 0 )) 00:18:13.280 04:09:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:13.280 04:09:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:13.280 04:09:27 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:18:13.280 04:09:27 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:18:13.280 04:09:27 -- nvmf/common.sh@409 -- # rdma_device_init 00:18:13.280 04:09:27 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:18:13.280 04:09:27 -- nvmf/common.sh@58 -- # uname 00:18:13.280 04:09:27 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:13.280 04:09:27 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:13.280 04:09:27 -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:13.280 04:09:27 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:13.280 04:09:27 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:13.280 04:09:27 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:13.280 04:09:27 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:13.280 04:09:27 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:13.280 04:09:27 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:18:13.280 04:09:27 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:13.280 04:09:27 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:13.280 04:09:27 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:13.280 04:09:27 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:13.280 04:09:27 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:13.280 04:09:27 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:13.280 04:09:27 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:13.280 04:09:27 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:13.280 04:09:27 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:13.280 04:09:27 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:13.280 04:09:27 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:13.280 04:09:27 -- nvmf/common.sh@105 -- # continue 2 
00:18:13.280 04:09:27 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:13.538 04:09:27 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:13.538 04:09:27 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:13.538 04:09:27 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:13.538 04:09:27 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:13.538 04:09:27 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:13.538 04:09:27 -- nvmf/common.sh@105 -- # continue 2 00:18:13.538 04:09:27 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:13.538 04:09:27 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:13.538 04:09:27 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:13.538 04:09:27 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:13.538 04:09:27 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:13.538 04:09:27 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:13.538 04:09:27 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:13.539 04:09:27 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:13.539 04:09:27 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:13.539 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:13.539 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:18:13.539 altname enp24s0f0np0 00:18:13.539 altname ens785f0np0 00:18:13.539 inet 192.168.100.8/24 scope global mlx_0_0 00:18:13.539 valid_lft forever preferred_lft forever 00:18:13.539 04:09:27 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:13.539 04:09:27 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:13.539 04:09:27 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:13.539 04:09:27 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:13.539 04:09:27 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:13.539 04:09:27 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:13.539 04:09:27 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:13.539 
04:09:27 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:13.539 04:09:27 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:13.539 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:13.539 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:18:13.539 altname enp24s0f1np1 00:18:13.539 altname ens785f1np1 00:18:13.539 inet 192.168.100.9/24 scope global mlx_0_1 00:18:13.539 valid_lft forever preferred_lft forever 00:18:13.539 04:09:27 -- nvmf/common.sh@411 -- # return 0 00:18:13.539 04:09:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:13.539 04:09:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:13.539 04:09:27 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:18:13.539 04:09:27 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:18:13.539 04:09:27 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:13.539 04:09:27 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:13.539 04:09:27 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:13.539 04:09:27 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:13.539 04:09:27 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:13.539 04:09:27 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:13.539 04:09:27 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:13.539 04:09:27 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:13.539 04:09:27 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:13.539 04:09:27 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:13.539 04:09:27 -- nvmf/common.sh@105 -- # continue 2 00:18:13.539 04:09:27 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:13.539 04:09:27 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:13.539 04:09:27 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:13.539 04:09:27 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:13.539 
04:09:27 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:13.539 04:09:27 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:13.539 04:09:27 -- nvmf/common.sh@105 -- # continue 2 00:18:13.539 04:09:27 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:13.539 04:09:27 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:13.539 04:09:27 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:13.539 04:09:27 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:13.539 04:09:27 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:13.539 04:09:27 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:13.539 04:09:27 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:13.539 04:09:27 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:13.539 04:09:27 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:13.539 04:09:27 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:13.539 04:09:27 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:13.539 04:09:27 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:13.539 04:09:27 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:18:13.539 192.168.100.9' 00:18:13.539 04:09:27 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:13.539 192.168.100.9' 00:18:13.539 04:09:27 -- nvmf/common.sh@446 -- # head -n 1 00:18:13.539 04:09:27 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:13.539 04:09:27 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:18:13.539 192.168.100.9' 00:18:13.539 04:09:27 -- nvmf/common.sh@447 -- # tail -n +2 00:18:13.539 04:09:27 -- nvmf/common.sh@447 -- # head -n 1 00:18:13.539 04:09:27 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:13.539 04:09:27 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:18:13.539 04:09:27 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:13.539 04:09:27 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:18:13.539 04:09:27 -- nvmf/common.sh@457 -- # '[' rdma == rdma 
']' 00:18:13.539 04:09:27 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:18:13.539 04:09:27 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:18:13.539 04:09:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:13.539 04:09:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:13.539 04:09:27 -- common/autotest_common.sh@10 -- # set +x 00:18:13.539 04:09:27 -- nvmf/common.sh@470 -- # nvmfpid=338959 00:18:13.539 04:09:27 -- nvmf/common.sh@471 -- # waitforlisten 338959 00:18:13.539 04:09:27 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:13.539 04:09:27 -- common/autotest_common.sh@817 -- # '[' -z 338959 ']' 00:18:13.539 04:09:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.539 04:09:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:13.539 04:09:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.539 04:09:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:13.539 04:09:27 -- common/autotest_common.sh@10 -- # set +x 00:18:13.539 [2024-04-19 04:09:27.984635] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:18:13.539 [2024-04-19 04:09:27.984678] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.539 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.539 [2024-04-19 04:09:28.034079] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:13.798 [2024-04-19 04:09:28.106656] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:13.798 [2024-04-19 04:09:28.106689] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.798 [2024-04-19 04:09:28.106695] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.798 [2024-04-19 04:09:28.106701] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.798 [2024-04-19 04:09:28.106705] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:13.798 [2024-04-19 04:09:28.106799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.798 [2024-04-19 04:09:28.106891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:13.798 [2024-04-19 04:09:28.106997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.798 [2024-04-19 04:09:28.106999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:14.367 04:09:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:14.367 04:09:28 -- common/autotest_common.sh@850 -- # return 0 00:18:14.367 04:09:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:14.367 04:09:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:14.367 04:09:28 -- common/autotest_common.sh@10 -- # set +x 00:18:14.367 04:09:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.367 04:09:28 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:14.367 04:09:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.367 04:09:28 -- common/autotest_common.sh@10 -- # set +x 00:18:14.367 [2024-04-19 04:09:28.820038] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12939b0/0x1297ea0) succeed. 
00:18:14.367 [2024-04-19 04:09:28.829369] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1294fa0/0x12d9530) succeed. 00:18:14.626 04:09:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.626 04:09:28 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:18:14.626 04:09:28 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:18:14.626 04:09:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:14.626 04:09:28 -- common/autotest_common.sh@10 -- # set +x 00:18:14.626 04:09:28 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:14.626 04:09:28 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:14.626 04:09:28 -- target/shutdown.sh@28 -- # cat 00:18:14.626 04:09:28 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:14.626 04:09:28 -- target/shutdown.sh@28 -- # cat 00:18:14.626 04:09:28 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:14.626 04:09:28 -- target/shutdown.sh@28 -- # cat 00:18:14.626 04:09:28 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:14.626 04:09:28 -- target/shutdown.sh@28 -- # cat 00:18:14.626 04:09:28 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:14.626 04:09:28 -- target/shutdown.sh@28 -- # cat 00:18:14.626 04:09:28 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:14.626 04:09:28 -- target/shutdown.sh@28 -- # cat 00:18:14.626 04:09:28 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:14.626 04:09:28 -- target/shutdown.sh@28 -- # cat 00:18:14.626 04:09:28 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:14.626 04:09:28 -- target/shutdown.sh@28 -- # cat 00:18:14.626 04:09:28 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:14.626 04:09:28 -- target/shutdown.sh@28 -- # cat 00:18:14.626 04:09:28 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:14.626 
04:09:28 -- target/shutdown.sh@28 -- # cat 00:18:14.626 04:09:28 -- target/shutdown.sh@35 -- # rpc_cmd 00:18:14.626 04:09:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.626 04:09:28 -- common/autotest_common.sh@10 -- # set +x 00:18:14.626 Malloc1 00:18:14.626 [2024-04-19 04:09:29.028119] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:14.626 Malloc2 00:18:14.626 Malloc3 00:18:14.626 Malloc4 00:18:14.886 Malloc5 00:18:14.886 Malloc6 00:18:14.886 Malloc7 00:18:14.886 Malloc8 00:18:14.886 Malloc9 00:18:14.886 Malloc10 00:18:15.145 04:09:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:15.145 04:09:29 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:18:15.145 04:09:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:15.145 04:09:29 -- common/autotest_common.sh@10 -- # set +x 00:18:15.145 04:09:29 -- target/shutdown.sh@103 -- # perfpid=339269 00:18:15.145 04:09:29 -- target/shutdown.sh@104 -- # waitforlisten 339269 /var/tmp/bdevperf.sock 00:18:15.145 04:09:29 -- common/autotest_common.sh@817 -- # '[' -z 339269 ']' 00:18:15.145 04:09:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:15.145 04:09:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:15.145 04:09:29 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:18:15.145 04:09:29 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:15.145 04:09:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:15.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:15.145 04:09:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:15.145 04:09:29 -- nvmf/common.sh@521 -- # config=() 00:18:15.145 04:09:29 -- common/autotest_common.sh@10 -- # set +x 00:18:15.145 04:09:29 -- nvmf/common.sh@521 -- # local subsystem config 00:18:15.145 04:09:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:15.145 04:09:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:15.145 { 00:18:15.145 "params": { 00:18:15.145 "name": "Nvme$subsystem", 00:18:15.145 "trtype": "$TEST_TRANSPORT", 00:18:15.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:15.145 "adrfam": "ipv4", 00:18:15.145 "trsvcid": "$NVMF_PORT", 00:18:15.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:15.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:15.145 "hdgst": ${hdgst:-false}, 00:18:15.145 "ddgst": ${ddgst:-false} 00:18:15.146 }, 00:18:15.146 "method": "bdev_nvme_attach_controller" 00:18:15.146 } 00:18:15.146 EOF 00:18:15.146 )") 00:18:15.146 04:09:29 -- nvmf/common.sh@543 -- # cat 00:18:15.146 04:09:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:15.146 04:09:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:15.146 { 00:18:15.146 "params": { 00:18:15.146 "name": "Nvme$subsystem", 00:18:15.146 "trtype": "$TEST_TRANSPORT", 00:18:15.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:15.146 "adrfam": "ipv4", 00:18:15.146 "trsvcid": "$NVMF_PORT", 00:18:15.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:15.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:15.146 "hdgst": ${hdgst:-false}, 00:18:15.146 "ddgst": ${ddgst:-false} 00:18:15.146 }, 00:18:15.146 "method": "bdev_nvme_attach_controller" 00:18:15.146 } 00:18:15.146 EOF 00:18:15.146 )") 00:18:15.146 04:09:29 -- nvmf/common.sh@543 -- # cat 00:18:15.146 04:09:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:15.146 04:09:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:15.146 { 00:18:15.146 "params": { 00:18:15.146 "name": 
"Nvme$subsystem", 00:18:15.146 "trtype": "$TEST_TRANSPORT", 00:18:15.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:15.146 "adrfam": "ipv4", 00:18:15.146 "trsvcid": "$NVMF_PORT", 00:18:15.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:15.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:15.146 "hdgst": ${hdgst:-false}, 00:18:15.146 "ddgst": ${ddgst:-false} 00:18:15.146 }, 00:18:15.146 "method": "bdev_nvme_attach_controller" 00:18:15.146 } 00:18:15.146 EOF 00:18:15.146 )") 00:18:15.146 04:09:29 -- nvmf/common.sh@543 -- # cat 00:18:15.146 04:09:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:15.146 04:09:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:15.146 { 00:18:15.146 "params": { 00:18:15.146 "name": "Nvme$subsystem", 00:18:15.146 "trtype": "$TEST_TRANSPORT", 00:18:15.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:15.146 "adrfam": "ipv4", 00:18:15.146 "trsvcid": "$NVMF_PORT", 00:18:15.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:15.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:15.146 "hdgst": ${hdgst:-false}, 00:18:15.146 "ddgst": ${ddgst:-false} 00:18:15.146 }, 00:18:15.146 "method": "bdev_nvme_attach_controller" 00:18:15.146 } 00:18:15.146 EOF 00:18:15.146 )") 00:18:15.146 04:09:29 -- nvmf/common.sh@543 -- # cat 00:18:15.146 04:09:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:15.146 04:09:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:15.146 { 00:18:15.146 "params": { 00:18:15.146 "name": "Nvme$subsystem", 00:18:15.146 "trtype": "$TEST_TRANSPORT", 00:18:15.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:15.146 "adrfam": "ipv4", 00:18:15.146 "trsvcid": "$NVMF_PORT", 00:18:15.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:15.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:15.146 "hdgst": ${hdgst:-false}, 00:18:15.146 "ddgst": ${ddgst:-false} 00:18:15.146 }, 00:18:15.146 "method": "bdev_nvme_attach_controller" 00:18:15.146 } 00:18:15.146 EOF 
00:18:15.146 )") 00:18:15.146 04:09:29 -- nvmf/common.sh@543 -- # cat 00:18:15.146 04:09:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:15.146 04:09:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:15.146 { 00:18:15.146 "params": { 00:18:15.146 "name": "Nvme$subsystem", 00:18:15.146 "trtype": "$TEST_TRANSPORT", 00:18:15.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:15.146 "adrfam": "ipv4", 00:18:15.146 "trsvcid": "$NVMF_PORT", 00:18:15.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:15.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:15.146 "hdgst": ${hdgst:-false}, 00:18:15.146 "ddgst": ${ddgst:-false} 00:18:15.146 }, 00:18:15.146 "method": "bdev_nvme_attach_controller" 00:18:15.146 } 00:18:15.146 EOF 00:18:15.146 )") 00:18:15.146 04:09:29 -- nvmf/common.sh@543 -- # cat 00:18:15.146 04:09:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:15.146 04:09:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:15.146 { 00:18:15.146 "params": { 00:18:15.146 "name": "Nvme$subsystem", 00:18:15.146 "trtype": "$TEST_TRANSPORT", 00:18:15.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:15.146 "adrfam": "ipv4", 00:18:15.146 "trsvcid": "$NVMF_PORT", 00:18:15.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:15.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:15.146 "hdgst": ${hdgst:-false}, 00:18:15.146 "ddgst": ${ddgst:-false} 00:18:15.146 }, 00:18:15.146 "method": "bdev_nvme_attach_controller" 00:18:15.146 } 00:18:15.146 EOF 00:18:15.146 )") 00:18:15.146 [2024-04-19 04:09:29.494827] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:18:15.146 [2024-04-19 04:09:29.494873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid339269 ] 00:18:15.146 04:09:29 -- nvmf/common.sh@543 -- # cat 00:18:15.146 04:09:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:15.146 04:09:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:15.146 { 00:18:15.146 "params": { 00:18:15.146 "name": "Nvme$subsystem", 00:18:15.146 "trtype": "$TEST_TRANSPORT", 00:18:15.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:15.146 "adrfam": "ipv4", 00:18:15.146 "trsvcid": "$NVMF_PORT", 00:18:15.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:15.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:15.146 "hdgst": ${hdgst:-false}, 00:18:15.146 "ddgst": ${ddgst:-false} 00:18:15.146 }, 00:18:15.146 "method": "bdev_nvme_attach_controller" 00:18:15.146 } 00:18:15.146 EOF 00:18:15.146 )") 00:18:15.146 04:09:29 -- nvmf/common.sh@543 -- # cat 00:18:15.146 04:09:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:15.146 04:09:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:15.146 { 00:18:15.146 "params": { 00:18:15.146 "name": "Nvme$subsystem", 00:18:15.146 "trtype": "$TEST_TRANSPORT", 00:18:15.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:15.146 "adrfam": "ipv4", 00:18:15.146 "trsvcid": "$NVMF_PORT", 00:18:15.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:15.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:15.146 "hdgst": ${hdgst:-false}, 00:18:15.146 "ddgst": ${ddgst:-false} 00:18:15.146 }, 00:18:15.146 "method": "bdev_nvme_attach_controller" 00:18:15.146 } 00:18:15.146 EOF 00:18:15.146 )") 00:18:15.146 04:09:29 -- nvmf/common.sh@543 -- # cat 00:18:15.146 04:09:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:15.146 04:09:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 
00:18:15.146 { 00:18:15.146 "params": { 00:18:15.146 "name": "Nvme$subsystem", 00:18:15.146 "trtype": "$TEST_TRANSPORT", 00:18:15.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:15.146 "adrfam": "ipv4", 00:18:15.146 "trsvcid": "$NVMF_PORT", 00:18:15.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:15.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:15.146 "hdgst": ${hdgst:-false}, 00:18:15.146 "ddgst": ${ddgst:-false} 00:18:15.146 }, 00:18:15.146 "method": "bdev_nvme_attach_controller" 00:18:15.146 } 00:18:15.146 EOF 00:18:15.146 )") 00:18:15.146 04:09:29 -- nvmf/common.sh@543 -- # cat 00:18:15.146 EAL: No free 2048 kB hugepages reported on node 1 00:18:15.146 04:09:29 -- nvmf/common.sh@545 -- # jq . 00:18:15.146 04:09:29 -- nvmf/common.sh@546 -- # IFS=, 00:18:15.146 04:09:29 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:15.146 "params": { 00:18:15.146 "name": "Nvme1", 00:18:15.146 "trtype": "rdma", 00:18:15.146 "traddr": "192.168.100.8", 00:18:15.146 "adrfam": "ipv4", 00:18:15.146 "trsvcid": "4420", 00:18:15.146 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.146 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:15.146 "hdgst": false, 00:18:15.146 "ddgst": false 00:18:15.146 }, 00:18:15.146 "method": "bdev_nvme_attach_controller" 00:18:15.146 },{ 00:18:15.146 "params": { 00:18:15.146 "name": "Nvme2", 00:18:15.146 "trtype": "rdma", 00:18:15.146 "traddr": "192.168.100.8", 00:18:15.146 "adrfam": "ipv4", 00:18:15.146 "trsvcid": "4420", 00:18:15.146 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:15.146 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:15.146 "hdgst": false, 00:18:15.146 "ddgst": false 00:18:15.146 }, 00:18:15.146 "method": "bdev_nvme_attach_controller" 00:18:15.146 },{ 00:18:15.146 "params": { 00:18:15.146 "name": "Nvme3", 00:18:15.146 "trtype": "rdma", 00:18:15.146 "traddr": "192.168.100.8", 00:18:15.146 "adrfam": "ipv4", 00:18:15.146 "trsvcid": "4420", 00:18:15.146 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:15.146 "hostnqn": 
"nqn.2016-06.io.spdk:host3", 00:18:15.146 "hdgst": false, 00:18:15.146 "ddgst": false 00:18:15.146 }, 00:18:15.146 "method": "bdev_nvme_attach_controller" 00:18:15.146 },{ 00:18:15.146 "params": { 00:18:15.146 "name": "Nvme4", 00:18:15.146 "trtype": "rdma", 00:18:15.146 "traddr": "192.168.100.8", 00:18:15.146 "adrfam": "ipv4", 00:18:15.146 "trsvcid": "4420", 00:18:15.146 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:15.146 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:15.146 "hdgst": false, 00:18:15.146 "ddgst": false 00:18:15.146 }, 00:18:15.146 "method": "bdev_nvme_attach_controller" 00:18:15.146 },{ 00:18:15.146 "params": { 00:18:15.147 "name": "Nvme5", 00:18:15.147 "trtype": "rdma", 00:18:15.147 "traddr": "192.168.100.8", 00:18:15.147 "adrfam": "ipv4", 00:18:15.147 "trsvcid": "4420", 00:18:15.147 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:15.147 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:15.147 "hdgst": false, 00:18:15.147 "ddgst": false 00:18:15.147 }, 00:18:15.147 "method": "bdev_nvme_attach_controller" 00:18:15.147 },{ 00:18:15.147 "params": { 00:18:15.147 "name": "Nvme6", 00:18:15.147 "trtype": "rdma", 00:18:15.147 "traddr": "192.168.100.8", 00:18:15.147 "adrfam": "ipv4", 00:18:15.147 "trsvcid": "4420", 00:18:15.147 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:15.147 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:15.147 "hdgst": false, 00:18:15.147 "ddgst": false 00:18:15.147 }, 00:18:15.147 "method": "bdev_nvme_attach_controller" 00:18:15.147 },{ 00:18:15.147 "params": { 00:18:15.147 "name": "Nvme7", 00:18:15.147 "trtype": "rdma", 00:18:15.147 "traddr": "192.168.100.8", 00:18:15.147 "adrfam": "ipv4", 00:18:15.147 "trsvcid": "4420", 00:18:15.147 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:15.147 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:15.147 "hdgst": false, 00:18:15.147 "ddgst": false 00:18:15.147 }, 00:18:15.147 "method": "bdev_nvme_attach_controller" 00:18:15.147 },{ 00:18:15.147 "params": { 00:18:15.147 "name": "Nvme8", 00:18:15.147 
"trtype": "rdma", 00:18:15.147 "traddr": "192.168.100.8", 00:18:15.147 "adrfam": "ipv4", 00:18:15.147 "trsvcid": "4420", 00:18:15.147 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:15.147 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:15.147 "hdgst": false, 00:18:15.147 "ddgst": false 00:18:15.147 }, 00:18:15.147 "method": "bdev_nvme_attach_controller" 00:18:15.147 },{ 00:18:15.147 "params": { 00:18:15.147 "name": "Nvme9", 00:18:15.147 "trtype": "rdma", 00:18:15.147 "traddr": "192.168.100.8", 00:18:15.147 "adrfam": "ipv4", 00:18:15.147 "trsvcid": "4420", 00:18:15.147 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:15.147 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:15.147 "hdgst": false, 00:18:15.147 "ddgst": false 00:18:15.147 }, 00:18:15.147 "method": "bdev_nvme_attach_controller" 00:18:15.147 },{ 00:18:15.147 "params": { 00:18:15.147 "name": "Nvme10", 00:18:15.147 "trtype": "rdma", 00:18:15.147 "traddr": "192.168.100.8", 00:18:15.147 "adrfam": "ipv4", 00:18:15.147 "trsvcid": "4420", 00:18:15.147 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:15.147 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:15.147 "hdgst": false, 00:18:15.147 "ddgst": false 00:18:15.147 }, 00:18:15.147 "method": "bdev_nvme_attach_controller" 00:18:15.147 }' 00:18:15.147 [2024-04-19 04:09:29.547058] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.147 [2024-04-19 04:09:29.613898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.081 Running I/O for 10 seconds... 
00:18:16.081 04:09:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:16.081 04:09:30 -- common/autotest_common.sh@850 -- # return 0 00:18:16.081 04:09:30 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:16.081 04:09:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.081 04:09:30 -- common/autotest_common.sh@10 -- # set +x 00:18:16.339 04:09:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.339 04:09:30 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:18:16.339 04:09:30 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:18:16.339 04:09:30 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:18:16.339 04:09:30 -- target/shutdown.sh@57 -- # local ret=1 00:18:16.339 04:09:30 -- target/shutdown.sh@58 -- # local i 00:18:16.339 04:09:30 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:18:16.339 04:09:30 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:18:16.339 04:09:30 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:16.339 04:09:30 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:18:16.339 04:09:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.339 04:09:30 -- common/autotest_common.sh@10 -- # set +x 00:18:16.339 04:09:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:16.339 04:09:30 -- target/shutdown.sh@60 -- # read_io_count=4 00:18:16.339 04:09:30 -- target/shutdown.sh@63 -- # '[' 4 -ge 100 ']' 00:18:16.339 04:09:30 -- target/shutdown.sh@67 -- # sleep 0.25 00:18:16.597 04:09:30 -- target/shutdown.sh@59 -- # (( i-- )) 00:18:16.597 04:09:30 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:18:16.597 04:09:30 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:16.597 04:09:30 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:18:16.597 04:09:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:16.597 04:09:30 
-- common/autotest_common.sh@10 -- # set +x
00:18:16.597 04:09:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:16.597 04:09:31 -- target/shutdown.sh@60 -- # read_io_count=155
00:18:16.597 04:09:31 -- target/shutdown.sh@63 -- # '[' 155 -ge 100 ']'
00:18:16.597 04:09:31 -- target/shutdown.sh@64 -- # ret=0
00:18:16.597 04:09:31 -- target/shutdown.sh@65 -- # break
00:18:16.597 04:09:31 -- target/shutdown.sh@69 -- # return 0
00:18:16.597 04:09:31 -- target/shutdown.sh@110 -- # killprocess 339269
00:18:16.597 04:09:31 -- common/autotest_common.sh@936 -- # '[' -z 339269 ']'
00:18:16.597 04:09:31 -- common/autotest_common.sh@940 -- # kill -0 339269
00:18:16.597 04:09:31 -- common/autotest_common.sh@941 -- # uname
00:18:16.597 04:09:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:16.597 04:09:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 339269
00:18:16.856 04:09:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:18:16.856 04:09:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:18:16.856 04:09:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 339269'
00:18:16.856 killing process with pid 339269
00:18:16.856 04:09:31 -- common/autotest_common.sh@955 -- # kill 339269
00:18:16.856 04:09:31 -- common/autotest_common.sh@960 -- # wait 339269
00:18:16.856 Received shutdown signal, test time was about 0.697915 seconds
00:18:16.856
00:18:16.856 Latency(us)
00:18:16.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:16.856 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:16.856 Verification LBA range: start 0x0 length 0x400
00:18:16.856 Nvme1n1 : 0.69 406.54 25.41 0.00 0.00 153791.88 5339.97 215928.98
00:18:16.856 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:16.856 Verification LBA range: start 0x0 length 0x400
00:18:16.856 Nvme2n1 : 0.68 376.06 23.50 0.00 0.00 163648.85 53593.88 153014.42
00:18:16.856 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:16.856 Verification LBA range: start 0x0 length 0x400
00:18:16.856 Nvme3n1 : 0.69 407.45 25.47 0.00 0.00 146664.29 4174.89 143693.75
00:18:16.856 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:16.856 Verification LBA range: start 0x0 length 0x400
00:18:16.856 Nvme4n1 : 0.69 422.75 26.42 0.00 0.00 138489.23 3276.80 136703.24
00:18:16.856 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:16.856 Verification LBA range: start 0x0 length 0x400
00:18:16.856 Nvme5n1 : 0.69 393.31 24.58 0.00 0.00 145108.16 7233.23 126605.84
00:18:16.856 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:16.856 Verification LBA range: start 0x0 length 0x400
00:18:16.856 Nvme6n1 : 0.69 414.38 25.90 0.00 0.00 134927.28 7427.41 119615.34
00:18:16.856 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:16.856 Verification LBA range: start 0x0 length 0x400
00:18:16.856 Nvme7n1 : 0.69 418.08 26.13 0.00 0.00 130607.56 4636.07 113401.55
00:18:16.856 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:16.856 Verification LBA range: start 0x0 length 0x400
00:18:16.856 Nvme8n1 : 0.69 421.76 26.36 0.00 0.00 126380.93 5291.43 107187.77
00:18:16.856 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:16.856 Verification LBA range: start 0x0 length 0x400
00:18:16.856 Nvme9n1 : 0.70 391.07 24.44 0.00 0.00 132490.70 7815.77 106411.05
00:18:16.856 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:16.856 Verification LBA range: start 0x0 length 0x400
00:18:16.856 Nvme10n1 : 0.70 367.31 22.96 0.00 0.00 138806.99 8446.86 170102.33
00:18:16.856 ===================================================================================================================
00:18:16.856 Total : 4018.71 251.17 0.00 0.00 140798.41 3276.80 215928.98
00:18:17.115 04:09:31 -- target/shutdown.sh@113 -- # sleep 1
00:18:18.052 04:09:32 -- target/shutdown.sh@114 -- # kill -0 338959
00:18:18.052 04:09:32 -- target/shutdown.sh@116 -- # stoptarget
00:18:18.052 04:09:32 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:18:18.052 04:09:32 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:18:18.052 04:09:32 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:18:18.052 04:09:32 -- target/shutdown.sh@45 -- # nvmftestfini
00:18:18.052 04:09:32 -- nvmf/common.sh@477 -- # nvmfcleanup
00:18:18.052 04:09:32 -- nvmf/common.sh@117 -- # sync
00:18:18.052 04:09:32 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:18:18.052 04:09:32 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:18:18.052 04:09:32 -- nvmf/common.sh@120 -- # set +e
00:18:18.052 04:09:32 -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:18.052 04:09:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:18:18.052 rmmod nvme_rdma
00:18:18.052 rmmod nvme_fabrics
00:18:18.052 04:09:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:18:18.052 04:09:32 -- nvmf/common.sh@124 -- # set -e
00:18:18.052 04:09:32 -- nvmf/common.sh@125 -- # return 0
00:18:18.052 04:09:32 -- nvmf/common.sh@478 -- # '[' -n 338959 ']'
00:18:18.052 04:09:32 -- nvmf/common.sh@479 -- # killprocess 338959
00:18:18.052 04:09:32 -- common/autotest_common.sh@936 -- # '[' -z 338959 ']'
00:18:18.052 04:09:32 -- common/autotest_common.sh@940 -- # kill -0 338959
00:18:18.052 04:09:32 -- common/autotest_common.sh@941 -- # uname
00:18:18.052 04:09:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:18.052 04:09:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 338959
00:18:18.311 04:09:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:18:18.311 04:09:32 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:18.311 04:09:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 338959' 00:18:18.311 killing process with pid 338959 00:18:18.311 04:09:32 -- common/autotest_common.sh@955 -- # kill 338959 00:18:18.311 04:09:32 -- common/autotest_common.sh@960 -- # wait 338959 00:18:18.570 04:09:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:18.570 04:09:33 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:18:18.570 00:18:18.570 real 0m5.339s 00:18:18.570 user 0m21.428s 00:18:18.570 sys 0m1.015s 00:18:18.570 04:09:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:18.570 04:09:33 -- common/autotest_common.sh@10 -- # set +x 00:18:18.570 ************************************ 00:18:18.570 END TEST nvmf_shutdown_tc2 00:18:18.570 ************************************ 00:18:18.570 04:09:33 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:18:18.570 04:09:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:18.570 04:09:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:18.571 04:09:33 -- common/autotest_common.sh@10 -- # set +x 00:18:18.840 ************************************ 00:18:18.840 START TEST nvmf_shutdown_tc3 00:18:18.840 ************************************ 00:18:18.840 04:09:33 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 00:18:18.840 04:09:33 -- target/shutdown.sh@121 -- # starttarget 00:18:18.840 04:09:33 -- target/shutdown.sh@15 -- # nvmftestinit 00:18:18.840 04:09:33 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:18:18.840 04:09:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:18.840 04:09:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:18.840 04:09:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:18.840 04:09:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:18.840 04:09:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.840 04:09:33 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.840 04:09:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.840 04:09:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:18.840 04:09:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:18.840 04:09:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:18.840 04:09:33 -- common/autotest_common.sh@10 -- # set +x 00:18:18.840 04:09:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:18.840 04:09:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:18.840 04:09:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:18.840 04:09:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:18.840 04:09:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:18.840 04:09:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:18.840 04:09:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:18.840 04:09:33 -- nvmf/common.sh@295 -- # net_devs=() 00:18:18.840 04:09:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:18.840 04:09:33 -- nvmf/common.sh@296 -- # e810=() 00:18:18.840 04:09:33 -- nvmf/common.sh@296 -- # local -ga e810 00:18:18.840 04:09:33 -- nvmf/common.sh@297 -- # x722=() 00:18:18.840 04:09:33 -- nvmf/common.sh@297 -- # local -ga x722 00:18:18.840 04:09:33 -- nvmf/common.sh@298 -- # mlx=() 00:18:18.840 04:09:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:18.840 04:09:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:18.840 04:09:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:18.840 04:09:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:18.840 04:09:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:18.840 04:09:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:18.840 04:09:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:18.840 04:09:33 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:18.840 04:09:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:18.840 04:09:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:18.840 04:09:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:18.840 04:09:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:18.840 04:09:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:18.840 04:09:33 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:18.840 04:09:33 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:18.840 04:09:33 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:18.840 04:09:33 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:18.840 04:09:33 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:18.840 04:09:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:18.840 04:09:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:18.840 04:09:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:18.840 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:18.840 04:09:33 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:18.840 04:09:33 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:18.840 04:09:33 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:18.840 04:09:33 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:18.840 04:09:33 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:18.840 04:09:33 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:18.840 04:09:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:18.840 04:09:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:18.840 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:18.840 04:09:33 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:18.840 04:09:33 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:18.840 04:09:33 -- 
nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:18.840 04:09:33 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:18.840 04:09:33 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:18.840 04:09:33 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:18.840 04:09:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:18.840 04:09:33 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:18.840 04:09:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:18.840 04:09:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.840 04:09:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:18.840 04:09:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.840 04:09:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:18.840 Found net devices under 0000:18:00.0: mlx_0_0 00:18:18.840 04:09:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.840 04:09:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:18.840 04:09:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.840 04:09:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:18.840 04:09:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.840 04:09:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:18.840 Found net devices under 0000:18:00.1: mlx_0_1 00:18:18.840 04:09:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.840 04:09:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:18.840 04:09:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:18.840 04:09:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:18.840 04:09:33 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:18:18.840 04:09:33 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:18:18.840 04:09:33 -- nvmf/common.sh@409 -- # rdma_device_init 00:18:18.840 04:09:33 -- nvmf/common.sh@490 -- # 
load_ib_rdma_modules 00:18:18.840 04:09:33 -- nvmf/common.sh@58 -- # uname 00:18:18.840 04:09:33 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:18.840 04:09:33 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:18.840 04:09:33 -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:18.840 04:09:33 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:18.840 04:09:33 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:18.840 04:09:33 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:18.840 04:09:33 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:18.840 04:09:33 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:18.840 04:09:33 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:18:18.840 04:09:33 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:18.840 04:09:33 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:18.840 04:09:33 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:18.840 04:09:33 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:18.840 04:09:33 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:18.840 04:09:33 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:18.840 04:09:33 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:18.840 04:09:33 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:18.840 04:09:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:18.840 04:09:33 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:18.840 04:09:33 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:18.840 04:09:33 -- nvmf/common.sh@105 -- # continue 2 00:18:18.840 04:09:33 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:18.840 04:09:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:18.840 04:09:33 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:18.840 04:09:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:18.840 04:09:33 -- nvmf/common.sh@103 -- # [[ mlx_0_1 
== \m\l\x\_\0\_\1 ]] 00:18:18.840 04:09:33 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:18.840 04:09:33 -- nvmf/common.sh@105 -- # continue 2 00:18:18.840 04:09:33 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:18.841 04:09:33 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:18.841 04:09:33 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:18.841 04:09:33 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:18.841 04:09:33 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:18.841 04:09:33 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:18.841 04:09:33 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:18.841 04:09:33 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:18.841 04:09:33 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:18.841 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:18.841 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:18:18.841 altname enp24s0f0np0 00:18:18.841 altname ens785f0np0 00:18:18.841 inet 192.168.100.8/24 scope global mlx_0_0 00:18:18.841 valid_lft forever preferred_lft forever 00:18:18.841 04:09:33 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:18.841 04:09:33 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:18.841 04:09:33 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:18.841 04:09:33 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:18.841 04:09:33 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:18.841 04:09:33 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:18.841 04:09:33 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:18.841 04:09:33 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:18.841 04:09:33 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:18.841 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:18.841 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:18:18.841 altname enp24s0f1np1 00:18:18.841 altname ens785f1np1 00:18:18.841 inet 192.168.100.9/24 scope global mlx_0_1 00:18:18.841 
valid_lft forever preferred_lft forever 00:18:18.841 04:09:33 -- nvmf/common.sh@411 -- # return 0 00:18:18.841 04:09:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:18.841 04:09:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:18.841 04:09:33 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:18:18.841 04:09:33 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:18:18.841 04:09:33 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:18.841 04:09:33 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:18.841 04:09:33 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:18.841 04:09:33 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:18.841 04:09:33 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:18.841 04:09:33 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:18.841 04:09:33 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:18.841 04:09:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:18.841 04:09:33 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:18.841 04:09:33 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:18.841 04:09:33 -- nvmf/common.sh@105 -- # continue 2 00:18:18.841 04:09:33 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:18.841 04:09:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:18.841 04:09:33 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:18.841 04:09:33 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:18.841 04:09:33 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:18.841 04:09:33 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:18.841 04:09:33 -- nvmf/common.sh@105 -- # continue 2 00:18:18.841 04:09:33 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:18.841 04:09:33 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:18.841 04:09:33 -- nvmf/common.sh@112 -- # 
interface=mlx_0_0 00:18:18.841 04:09:33 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:18.841 04:09:33 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:18.841 04:09:33 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:19.100 04:09:33 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:19.100 04:09:33 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:19.100 04:09:33 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:19.100 04:09:33 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:19.100 04:09:33 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:19.100 04:09:33 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:19.100 04:09:33 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:18:19.100 192.168.100.9' 00:18:19.100 04:09:33 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:19.100 192.168.100.9' 00:18:19.100 04:09:33 -- nvmf/common.sh@446 -- # head -n 1 00:18:19.100 04:09:33 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:19.100 04:09:33 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:18:19.100 192.168.100.9' 00:18:19.100 04:09:33 -- nvmf/common.sh@447 -- # tail -n +2 00:18:19.100 04:09:33 -- nvmf/common.sh@447 -- # head -n 1 00:18:19.100 04:09:33 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:19.100 04:09:33 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:18:19.100 04:09:33 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:19.100 04:09:33 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:18:19.100 04:09:33 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:18:19.100 04:09:33 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:18:19.100 04:09:33 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:18:19.100 04:09:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:19.100 04:09:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:19.100 04:09:33 -- common/autotest_common.sh@10 -- # set +x 00:18:19.100 04:09:33 -- 
nvmf/common.sh@470 -- # nvmfpid=339973 00:18:19.100 04:09:33 -- nvmf/common.sh@471 -- # waitforlisten 339973 00:18:19.100 04:09:33 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:19.100 04:09:33 -- common/autotest_common.sh@817 -- # '[' -z 339973 ']' 00:18:19.100 04:09:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.100 04:09:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:19.100 04:09:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.100 04:09:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:19.100 04:09:33 -- common/autotest_common.sh@10 -- # set +x 00:18:19.100 [2024-04-19 04:09:33.467871] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:18:19.100 [2024-04-19 04:09:33.467917] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.100 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.100 [2024-04-19 04:09:33.520331] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:19.100 [2024-04-19 04:09:33.595897] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.100 [2024-04-19 04:09:33.595933] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.100 [2024-04-19 04:09:33.595940] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.100 [2024-04-19 04:09:33.595944] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:19.100 [2024-04-19 04:09:33.595949] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:19.100 [2024-04-19 04:09:33.596045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.100 [2024-04-19 04:09:33.596147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:19.100 [2024-04-19 04:09:33.596253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.100 [2024-04-19 04:09:33.596254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:20.036 04:09:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:20.036 04:09:34 -- common/autotest_common.sh@850 -- # return 0 00:18:20.036 04:09:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:20.036 04:09:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:20.036 04:09:34 -- common/autotest_common.sh@10 -- # set +x 00:18:20.036 04:09:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.036 04:09:34 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:20.036 04:09:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:20.036 04:09:34 -- common/autotest_common.sh@10 -- # set +x 00:18:20.036 [2024-04-19 04:09:34.313026] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x114a9b0/0x114eea0) succeed. 00:18:20.036 [2024-04-19 04:09:34.322403] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x114bfa0/0x1190530) succeed. 
00:18:20.036 04:09:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:20.036 04:09:34 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:18:20.036 04:09:34 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:18:20.036 04:09:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:20.036 04:09:34 -- common/autotest_common.sh@10 -- # set +x 00:18:20.036 04:09:34 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:20.036 04:09:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:20.036 04:09:34 -- target/shutdown.sh@28 -- # cat 00:18:20.036 04:09:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:20.036 04:09:34 -- target/shutdown.sh@28 -- # cat 00:18:20.036 04:09:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:20.036 04:09:34 -- target/shutdown.sh@28 -- # cat 00:18:20.036 04:09:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:20.036 04:09:34 -- target/shutdown.sh@28 -- # cat 00:18:20.036 04:09:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:20.036 04:09:34 -- target/shutdown.sh@28 -- # cat 00:18:20.036 04:09:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:20.036 04:09:34 -- target/shutdown.sh@28 -- # cat 00:18:20.036 04:09:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:20.036 04:09:34 -- target/shutdown.sh@28 -- # cat 00:18:20.036 04:09:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:20.036 04:09:34 -- target/shutdown.sh@28 -- # cat 00:18:20.036 04:09:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:20.036 04:09:34 -- target/shutdown.sh@28 -- # cat 00:18:20.036 04:09:34 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:20.036 04:09:34 -- target/shutdown.sh@28 -- # cat 00:18:20.036 04:09:34 -- target/shutdown.sh@35 -- # rpc_cmd 00:18:20.036 04:09:34 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:18:20.036 04:09:34 -- common/autotest_common.sh@10 -- # set +x 00:18:20.036 Malloc1 00:18:20.036 [2024-04-19 04:09:34.521153] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:20.036 Malloc2 00:18:20.295 Malloc3 00:18:20.295 Malloc4 00:18:20.295 Malloc5 00:18:20.295 Malloc6 00:18:20.295 Malloc7 00:18:20.295 Malloc8 00:18:20.554 Malloc9 00:18:20.555 Malloc10 00:18:20.555 04:09:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:20.555 04:09:34 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:18:20.555 04:09:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:20.555 04:09:34 -- common/autotest_common.sh@10 -- # set +x 00:18:20.555 04:09:34 -- target/shutdown.sh@125 -- # perfpid=340306 00:18:20.555 04:09:34 -- target/shutdown.sh@126 -- # waitforlisten 340306 /var/tmp/bdevperf.sock 00:18:20.555 04:09:34 -- common/autotest_common.sh@817 -- # '[' -z 340306 ']' 00:18:20.555 04:09:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:20.555 04:09:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:20.555 04:09:34 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:18:20.555 04:09:34 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:20.555 04:09:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:20.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:20.555 04:09:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:20.555 04:09:34 -- nvmf/common.sh@521 -- # config=() 00:18:20.555 04:09:34 -- common/autotest_common.sh@10 -- # set +x 00:18:20.555 04:09:34 -- nvmf/common.sh@521 -- # local subsystem config 00:18:20.555 04:09:34 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:20.555 04:09:34 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:20.555 { 00:18:20.555 "params": { 00:18:20.555 "name": "Nvme$subsystem", 00:18:20.555 "trtype": "$TEST_TRANSPORT", 00:18:20.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.555 "adrfam": "ipv4", 00:18:20.555 "trsvcid": "$NVMF_PORT", 00:18:20.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:20.555 "hdgst": ${hdgst:-false}, 00:18:20.555 "ddgst": ${ddgst:-false} 00:18:20.555 }, 00:18:20.555 "method": "bdev_nvme_attach_controller" 00:18:20.555 } 00:18:20.555 EOF 00:18:20.555 )") 00:18:20.555 04:09:34 -- nvmf/common.sh@543 -- # cat 00:18:20.555 04:09:34 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:20.555 04:09:34 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:20.555 { 00:18:20.555 "params": { 00:18:20.555 "name": "Nvme$subsystem", 00:18:20.555 "trtype": "$TEST_TRANSPORT", 00:18:20.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.555 "adrfam": "ipv4", 00:18:20.555 "trsvcid": "$NVMF_PORT", 00:18:20.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:20.555 "hdgst": ${hdgst:-false}, 00:18:20.555 "ddgst": ${ddgst:-false} 00:18:20.555 }, 00:18:20.555 "method": "bdev_nvme_attach_controller" 00:18:20.555 } 00:18:20.555 EOF 00:18:20.555 )") 00:18:20.555 04:09:34 -- nvmf/common.sh@543 -- # cat 00:18:20.555 04:09:34 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:20.555 04:09:34 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:20.555 { 00:18:20.555 "params": { 00:18:20.555 "name": 
"Nvme$subsystem", 00:18:20.555 "trtype": "$TEST_TRANSPORT", 00:18:20.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.555 "adrfam": "ipv4", 00:18:20.555 "trsvcid": "$NVMF_PORT", 00:18:20.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:20.555 "hdgst": ${hdgst:-false}, 00:18:20.555 "ddgst": ${ddgst:-false} 00:18:20.555 }, 00:18:20.555 "method": "bdev_nvme_attach_controller" 00:18:20.555 } 00:18:20.555 EOF 00:18:20.555 )") 00:18:20.555 04:09:34 -- nvmf/common.sh@543 -- # cat 00:18:20.555 04:09:34 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:20.555 04:09:34 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:20.555 { 00:18:20.555 "params": { 00:18:20.555 "name": "Nvme$subsystem", 00:18:20.555 "trtype": "$TEST_TRANSPORT", 00:18:20.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.555 "adrfam": "ipv4", 00:18:20.555 "trsvcid": "$NVMF_PORT", 00:18:20.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:20.555 "hdgst": ${hdgst:-false}, 00:18:20.555 "ddgst": ${ddgst:-false} 00:18:20.555 }, 00:18:20.555 "method": "bdev_nvme_attach_controller" 00:18:20.555 } 00:18:20.555 EOF 00:18:20.555 )") 00:18:20.555 04:09:34 -- nvmf/common.sh@543 -- # cat 00:18:20.555 04:09:34 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:20.555 04:09:34 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:20.555 { 00:18:20.555 "params": { 00:18:20.555 "name": "Nvme$subsystem", 00:18:20.555 "trtype": "$TEST_TRANSPORT", 00:18:20.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.555 "adrfam": "ipv4", 00:18:20.555 "trsvcid": "$NVMF_PORT", 00:18:20.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:20.555 "hdgst": ${hdgst:-false}, 00:18:20.555 "ddgst": ${ddgst:-false} 00:18:20.555 }, 00:18:20.555 "method": "bdev_nvme_attach_controller" 00:18:20.555 } 00:18:20.555 EOF 
00:18:20.555 )") 00:18:20.555 04:09:34 -- nvmf/common.sh@543 -- # cat 00:18:20.555 04:09:34 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:20.555 04:09:34 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:20.555 { 00:18:20.555 "params": { 00:18:20.555 "name": "Nvme$subsystem", 00:18:20.555 "trtype": "$TEST_TRANSPORT", 00:18:20.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.555 "adrfam": "ipv4", 00:18:20.555 "trsvcid": "$NVMF_PORT", 00:18:20.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:20.555 "hdgst": ${hdgst:-false}, 00:18:20.555 "ddgst": ${ddgst:-false} 00:18:20.555 }, 00:18:20.555 "method": "bdev_nvme_attach_controller" 00:18:20.555 } 00:18:20.555 EOF 00:18:20.555 )") 00:18:20.555 04:09:34 -- nvmf/common.sh@543 -- # cat 00:18:20.555 [2024-04-19 04:09:34.985857] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:18:20.555 [2024-04-19 04:09:34.985903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid340306 ] 00:18:20.555 04:09:34 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:20.555 04:09:34 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:20.555 { 00:18:20.555 "params": { 00:18:20.555 "name": "Nvme$subsystem", 00:18:20.555 "trtype": "$TEST_TRANSPORT", 00:18:20.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.555 "adrfam": "ipv4", 00:18:20.555 "trsvcid": "$NVMF_PORT", 00:18:20.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:20.555 "hdgst": ${hdgst:-false}, 00:18:20.555 "ddgst": ${ddgst:-false} 00:18:20.555 }, 00:18:20.555 "method": "bdev_nvme_attach_controller" 00:18:20.555 } 00:18:20.555 EOF 00:18:20.555 )") 00:18:20.555 04:09:34 -- nvmf/common.sh@543 -- # cat 00:18:20.555 
04:09:34 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:20.555 04:09:34 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:20.555 { 00:18:20.555 "params": { 00:18:20.555 "name": "Nvme$subsystem", 00:18:20.555 "trtype": "$TEST_TRANSPORT", 00:18:20.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.556 "adrfam": "ipv4", 00:18:20.556 "trsvcid": "$NVMF_PORT", 00:18:20.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:20.556 "hdgst": ${hdgst:-false}, 00:18:20.556 "ddgst": ${ddgst:-false} 00:18:20.556 }, 00:18:20.556 "method": "bdev_nvme_attach_controller" 00:18:20.556 } 00:18:20.556 EOF 00:18:20.556 )") 00:18:20.556 04:09:34 -- nvmf/common.sh@543 -- # cat 00:18:20.556 04:09:34 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:20.556 04:09:34 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:20.556 { 00:18:20.556 "params": { 00:18:20.556 "name": "Nvme$subsystem", 00:18:20.556 "trtype": "$TEST_TRANSPORT", 00:18:20.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.556 "adrfam": "ipv4", 00:18:20.556 "trsvcid": "$NVMF_PORT", 00:18:20.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:20.556 "hdgst": ${hdgst:-false}, 00:18:20.556 "ddgst": ${ddgst:-false} 00:18:20.556 }, 00:18:20.556 "method": "bdev_nvme_attach_controller" 00:18:20.556 } 00:18:20.556 EOF 00:18:20.556 )") 00:18:20.556 04:09:34 -- nvmf/common.sh@543 -- # cat 00:18:20.556 04:09:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:20.556 04:09:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:20.556 { 00:18:20.556 "params": { 00:18:20.556 "name": "Nvme$subsystem", 00:18:20.556 "trtype": "$TEST_TRANSPORT", 00:18:20.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.556 "adrfam": "ipv4", 00:18:20.556 "trsvcid": "$NVMF_PORT", 00:18:20.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.556 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:18:20.556 "hdgst": ${hdgst:-false}, 00:18:20.556 "ddgst": ${ddgst:-false} 00:18:20.556 }, 00:18:20.556 "method": "bdev_nvme_attach_controller" 00:18:20.556 } 00:18:20.556 EOF 00:18:20.556 )") 00:18:20.556 04:09:35 -- nvmf/common.sh@543 -- # cat 00:18:20.556 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.556 04:09:35 -- nvmf/common.sh@545 -- # jq . 00:18:20.556 04:09:35 -- nvmf/common.sh@546 -- # IFS=, 00:18:20.556 04:09:35 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:20.556 "params": { 00:18:20.556 "name": "Nvme1", 00:18:20.556 "trtype": "rdma", 00:18:20.556 "traddr": "192.168.100.8", 00:18:20.556 "adrfam": "ipv4", 00:18:20.556 "trsvcid": "4420", 00:18:20.556 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.556 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:20.556 "hdgst": false, 00:18:20.556 "ddgst": false 00:18:20.556 }, 00:18:20.556 "method": "bdev_nvme_attach_controller" 00:18:20.556 },{ 00:18:20.556 "params": { 00:18:20.556 "name": "Nvme2", 00:18:20.556 "trtype": "rdma", 00:18:20.556 "traddr": "192.168.100.8", 00:18:20.556 "adrfam": "ipv4", 00:18:20.556 "trsvcid": "4420", 00:18:20.556 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:20.556 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:20.556 "hdgst": false, 00:18:20.556 "ddgst": false 00:18:20.556 }, 00:18:20.556 "method": "bdev_nvme_attach_controller" 00:18:20.556 },{ 00:18:20.556 "params": { 00:18:20.556 "name": "Nvme3", 00:18:20.556 "trtype": "rdma", 00:18:20.556 "traddr": "192.168.100.8", 00:18:20.556 "adrfam": "ipv4", 00:18:20.556 "trsvcid": "4420", 00:18:20.556 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:20.556 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:20.556 "hdgst": false, 00:18:20.556 "ddgst": false 00:18:20.556 }, 00:18:20.556 "method": "bdev_nvme_attach_controller" 00:18:20.556 },{ 00:18:20.556 "params": { 00:18:20.556 "name": "Nvme4", 00:18:20.556 "trtype": "rdma", 00:18:20.556 "traddr": "192.168.100.8", 00:18:20.556 "adrfam": "ipv4", 
00:18:20.556 "trsvcid": "4420", 00:18:20.556 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:20.556 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:20.556 "hdgst": false, 00:18:20.556 "ddgst": false 00:18:20.556 }, 00:18:20.556 "method": "bdev_nvme_attach_controller" 00:18:20.556 },{ 00:18:20.556 "params": { 00:18:20.556 "name": "Nvme5", 00:18:20.556 "trtype": "rdma", 00:18:20.556 "traddr": "192.168.100.8", 00:18:20.556 "adrfam": "ipv4", 00:18:20.556 "trsvcid": "4420", 00:18:20.556 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:20.556 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:20.556 "hdgst": false, 00:18:20.556 "ddgst": false 00:18:20.556 }, 00:18:20.556 "method": "bdev_nvme_attach_controller" 00:18:20.556 },{ 00:18:20.556 "params": { 00:18:20.556 "name": "Nvme6", 00:18:20.556 "trtype": "rdma", 00:18:20.556 "traddr": "192.168.100.8", 00:18:20.556 "adrfam": "ipv4", 00:18:20.556 "trsvcid": "4420", 00:18:20.556 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:20.556 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:20.556 "hdgst": false, 00:18:20.556 "ddgst": false 00:18:20.556 }, 00:18:20.556 "method": "bdev_nvme_attach_controller" 00:18:20.556 },{ 00:18:20.556 "params": { 00:18:20.556 "name": "Nvme7", 00:18:20.556 "trtype": "rdma", 00:18:20.556 "traddr": "192.168.100.8", 00:18:20.556 "adrfam": "ipv4", 00:18:20.556 "trsvcid": "4420", 00:18:20.556 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:20.556 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:20.556 "hdgst": false, 00:18:20.556 "ddgst": false 00:18:20.556 }, 00:18:20.556 "method": "bdev_nvme_attach_controller" 00:18:20.556 },{ 00:18:20.556 "params": { 00:18:20.556 "name": "Nvme8", 00:18:20.556 "trtype": "rdma", 00:18:20.556 "traddr": "192.168.100.8", 00:18:20.556 "adrfam": "ipv4", 00:18:20.556 "trsvcid": "4420", 00:18:20.556 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:20.556 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:20.556 "hdgst": false, 00:18:20.556 "ddgst": false 00:18:20.556 }, 00:18:20.556 "method": 
"bdev_nvme_attach_controller" 00:18:20.556 },{ 00:18:20.556 "params": { 00:18:20.556 "name": "Nvme9", 00:18:20.556 "trtype": "rdma", 00:18:20.556 "traddr": "192.168.100.8", 00:18:20.556 "adrfam": "ipv4", 00:18:20.556 "trsvcid": "4420", 00:18:20.556 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:20.556 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:20.556 "hdgst": false, 00:18:20.556 "ddgst": false 00:18:20.556 }, 00:18:20.556 "method": "bdev_nvme_attach_controller" 00:18:20.556 },{ 00:18:20.556 "params": { 00:18:20.556 "name": "Nvme10", 00:18:20.556 "trtype": "rdma", 00:18:20.556 "traddr": "192.168.100.8", 00:18:20.556 "adrfam": "ipv4", 00:18:20.556 "trsvcid": "4420", 00:18:20.556 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:20.556 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:20.556 "hdgst": false, 00:18:20.556 "ddgst": false 00:18:20.556 }, 00:18:20.556 "method": "bdev_nvme_attach_controller" 00:18:20.556 }' 00:18:20.556 [2024-04-19 04:09:35.039518] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.815 [2024-04-19 04:09:35.106697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.750 Running I/O for 10 seconds... 
00:18:21.750 04:09:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:21.750 04:09:35 -- common/autotest_common.sh@850 -- # return 0 00:18:21.750 04:09:35 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:21.750 04:09:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:21.750 04:09:35 -- common/autotest_common.sh@10 -- # set +x 00:18:21.750 04:09:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:21.750 04:09:36 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:21.750 04:09:36 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:18:21.750 04:09:36 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:18:21.750 04:09:36 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:18:21.750 04:09:36 -- target/shutdown.sh@57 -- # local ret=1 00:18:21.750 04:09:36 -- target/shutdown.sh@58 -- # local i 00:18:21.750 04:09:36 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:18:21.750 04:09:36 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:18:21.750 04:09:36 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:21.750 04:09:36 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:18:21.750 04:09:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:21.750 04:09:36 -- common/autotest_common.sh@10 -- # set +x 00:18:21.750 04:09:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:21.750 04:09:36 -- target/shutdown.sh@60 -- # read_io_count=22 00:18:21.750 04:09:36 -- target/shutdown.sh@63 -- # '[' 22 -ge 100 ']' 00:18:21.750 04:09:36 -- target/shutdown.sh@67 -- # sleep 0.25 00:18:22.009 04:09:36 -- target/shutdown.sh@59 -- # (( i-- )) 00:18:22.009 04:09:36 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:18:22.009 04:09:36 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:22.009 
04:09:36 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:18:22.009 04:09:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:22.009 04:09:36 -- common/autotest_common.sh@10 -- # set +x 00:18:22.267 04:09:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:22.267 04:09:36 -- target/shutdown.sh@60 -- # read_io_count=180 00:18:22.267 04:09:36 -- target/shutdown.sh@63 -- # '[' 180 -ge 100 ']' 00:18:22.267 04:09:36 -- target/shutdown.sh@64 -- # ret=0 00:18:22.267 04:09:36 -- target/shutdown.sh@65 -- # break 00:18:22.267 04:09:36 -- target/shutdown.sh@69 -- # return 0 00:18:22.267 04:09:36 -- target/shutdown.sh@135 -- # killprocess 339973 00:18:22.267 04:09:36 -- common/autotest_common.sh@936 -- # '[' -z 339973 ']' 00:18:22.267 04:09:36 -- common/autotest_common.sh@940 -- # kill -0 339973 00:18:22.267 04:09:36 -- common/autotest_common.sh@941 -- # uname 00:18:22.267 04:09:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:22.267 04:09:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 339973 00:18:22.267 04:09:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:22.267 04:09:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:22.267 04:09:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 339973' 00:18:22.267 killing process with pid 339973 00:18:22.267 04:09:36 -- common/autotest_common.sh@955 -- # kill 339973 00:18:22.267 04:09:36 -- common/autotest_common.sh@960 -- # wait 339973 00:18:22.835 04:09:37 -- target/shutdown.sh@136 -- # nvmfpid= 00:18:22.835 04:09:37 -- target/shutdown.sh@139 -- # sleep 1 00:18:23.409 [2024-04-19 04:09:37.722488] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256f00 was disconnected and freed. reset controller. 00:18:23.409 [2024-04-19 04:09:37.724756] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256cc0 was disconnected and freed. reset controller. 
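The shutdown.sh trace above polls `bdev_get_iostat` over the bdevperf RPC socket, extracting `num_read_ops` with jq and succeeding once the count reaches 100 (22 on the first poll, 180 after a 0.25s sleep in this run), with the attempt counter starting at 10. A hedged sketch of that retry loop; `get_read_ops` is a hypothetical stub (backed by a temp file so the counter survives the `$(...)` subshell) standing in for the real `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'` call, which needs a live bdevperf process:

```shell
#!/usr/bin/env bash
# Stub for the RPC query: each call reports 90 more completed reads.
# The counter lives in a file so updates persist across $() subshells.
ops_file=$(mktemp)
echo 0 > "$ops_file"
get_read_ops() {
  local n
  n=$(( $(cat "$ops_file") + 90 ))
  echo "$n" > "$ops_file"
  echo "$n"
}

# Generic form of shutdown.sh's waitforio: retry up to 10 times,
# return 0 once the read count crosses the threshold.
waitforio() {
  local ret=1 i count
  for (( i = 10; i != 0; i-- )); do
    count=$(get_read_ops)
    if [ "$count" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}

waitforio && echo "bdevperf is making I/O progress"
```

With the stub, the first poll sees 90 reads (below the threshold), and the second sees 180, so the loop exits successfully after one sleep, just as the real run went from 22 to 180 read ops.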
00:18:23.409 [2024-04-19 04:09:37.726988] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256a80 was disconnected and freed. reset controller. 00:18:23.409 [2024-04-19 04:09:37.729340] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256840 was disconnected and freed. reset controller. 00:18:23.409 [2024-04-19 04:09:37.731536] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256600 was disconnected and freed. reset controller. 00:18:23.409 [2024-04-19 04:09:37.733687] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192563c0 was disconnected and freed. reset controller. 00:18:23.409 [2024-04-19 04:09:37.733738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9bfe80 len:0x10000 key:0x183c00 00:18:23.409 [2024-04-19 04:09:37.733765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e600 sqhd:c2e0 p:0 m:0 dnr:0 00:18:23.409 [2024-04-19 04:09:37.733808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9afe00 len:0x10000 key:0x183c00 00:18:23.409 [2024-04-19 04:09:37.733831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e600 sqhd:c2e0 p:0 m:0 dnr:0 00:18:23.409 [2024-04-19 04:09:37.733864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a99fd80 len:0x10000 key:0x183c00 00:18:23.409 [2024-04-19 04:09:37.733887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e600 sqhd:c2e0 p:0 m:0 dnr:0 00:18:23.409 [2024-04-19 04:09:37.733920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x20001a98fd00 len:0x10000 key:0x183c00 00:18:23.409 [2024-04-19 04:09:37.733941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e600 sqhd:c2e0 p:0 m:0 dnr:0 00:18:23.409 [2024-04-19 04:09:37.733974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a97fc80 len:0x10000 key:0x183c00 00:18:23.409 [2024-04-19 04:09:37.733996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e600 sqhd:c2e0 p:0 m:0 dnr:0 00:18:23.409 [2024-04-19 04:09:37.734029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a96fc00 len:0x10000 key:0x183c00 00:18:23.409 [2024-04-19 04:09:37.734050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e600 sqhd:c2e0 p:0 m:0 dnr:0 00:18:23.409 [2024-04-19 04:09:37.734083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a95fb80 len:0x10000 key:0x183c00 00:18:23.409 [2024-04-19 04:09:37.734105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e600 sqhd:c2e0 p:0 m:0 dnr:0 00:18:23.409 [2024-04-19 04:09:37.734137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a94fb00 len:0x10000 key:0x183c00 00:18:23.409 [2024-04-19 04:09:37.734158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e600 sqhd:c2e0 p:0 m:0 dnr:0 00:18:23.409 [2024-04-19 04:09:37.734204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a93fa80 len:0x10000 
key:0x183c00 00:18:23.409 [2024-04-19 04:09:37.734226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e600 sqhd:c2e0 p:0 m:0 dnr:0 00:18:23.409 [2024-04-19 04:09:37.734258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a92fa00 len:0x10000 key:0x183c00 00:18:23.409 [2024-04-19 04:09:37.734280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e600 sqhd:c2e0 p:0 m:0 dnr:0 00:18:23.409 [2024-04-19 04:09:37.734293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a91f980 len:0x10000 key:0x183c00 00:18:23.409 [2024-04-19 04:09:37.734301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e600 sqhd:c2e0 p:0 m:0 dnr:0 00:18:23.409 [2024-04-19 04:09:37.734313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a90f900 len:0x10000 key:0x183c00 00:18:23.409 [2024-04-19 04:09:37.734321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e600 sqhd:c2e0 p:0 m:0 dnr:0 00:18:23.409 [2024-04-19 04:09:37.734333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ff880 len:0x10000 key:0x183c00 00:18:23.409 [2024-04-19 04:09:37.734342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e600 sqhd:c2e0 p:0 m:0 dnr:0 00:18:23.409 [2024-04-19 04:09:37.734354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ef800 len:0x10000 key:0x183c00 00:18:23.409 [2024-04-19 
04:09:37.734362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e600 sqhd:c2e0 p:0 m:0 dnr:0 00:18:23.409 [2024-04-19 04:09:37.734374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8df780 len:0x10000 key:0x183c00 00:18:23.409 [2024-04-19 04:09:37.734381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e600 sqhd:c2e0 p:0 m:0 dnr:0 00:18:23.409 [2024-04-19 04:09:37.734394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8cf700 len:0x10000 key:0x183c00 00:18:23.409 [2024-04-19 04:09:37.734408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e600 sqhd:c2e0 p:0 m:0 dnr:0 00:18:23.409 [2024-04-19 04:09:37.734421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8bf680 len:0x10000 key:0x183c00 00:18:23.409 [2024-04-19 04:09:37.734429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e600 sqhd:c2e0 p:0 m:0 dnr:0 00:18:23.409 [2024-04-19 04:09:37.734441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8af600 len:0x10000 key:0x183c00 00:18:23.409 [2024-04-19 04:09:37.734448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e600 sqhd:c2e0 p:0 m:0 dnr:0 00:18:23.409 [2024-04-19 04:09:37.734460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a89f580 len:0x10000 key:0x183c00 00:18:23.409 [2024-04-19 04:09:37.734469] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e600 sqhd:c2e0 p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.734481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a88f500 len:0x10000 key:0x183c00 00:18:23.410 [2024-04-19 04:09:37.734492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e600 sqhd:c2e0 p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.734504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a87f480 len:0x10000 key:0x183c00 00:18:23.410 [2024-04-19 04:09:37.734513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e600 sqhd:c2e0 p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.734525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a86f400 len:0x10000 key:0x183c00 00:18:23.410 [2024-04-19 04:09:37.734533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e600 sqhd:c2e0 p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.734547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a85f380 len:0x10000 key:0x183c00 00:18:23.410 [2024-04-19 04:09:37.734555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e600 sqhd:c2e0 p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.734568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a84f300 len:0x10000 key:0x183c00 00:18:23.410 [2024-04-19 04:09:37.734575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:402e600 sqhd:c2e0 p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.734587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a45f980 len:0x10000 key:0x183300 00:18:23.410 [2024-04-19 04:09:37.734596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e600 sqhd:c2e0 p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.736606] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256180 was disconnected and freed. reset controller. 00:18:23.410 [2024-04-19 04:09:37.736753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adcff00 len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 04:09:37.736764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.736777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adbfe80 len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 04:09:37.736786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.736796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adafe00 len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 04:09:37.736804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.736815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20001ad9fd80 len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 04:09:37.736823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.736833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad8fd00 len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 04:09:37.736841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.736853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad7fc80 len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 04:09:37.736861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.736871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad6fc00 len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 04:09:37.736879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.736889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad5fb80 len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 04:09:37.736897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.736907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad4fb00 
len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 04:09:37.736915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.736925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad3fa80 len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 04:09:37.736933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.736943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad2fa00 len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 04:09:37.736951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.736961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad1f980 len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 04:09:37.736968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.736978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad0f900 len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 04:09:37.736987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.736997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acff880 len:0x10000 key:0x183200 
00:18:23.410 [2024-04-19 04:09:37.737005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.737014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acef800 len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 04:09:37.737022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.737033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acdf780 len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 04:09:37.737040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.737051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001accf700 len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 04:09:37.737059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.737069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acbf680 len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 04:09:37.737076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.737086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acaf600 len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 
04:09:37.737094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.737104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac9f580 len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 04:09:37.737112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.737122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8f500 len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 04:09:37.737130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.737140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac7f480 len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 04:09:37.737148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.737158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac6f400 len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 04:09:37.737165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.737175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac5f380 len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 04:09:37.737183] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.737192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac4f300 len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 04:09:37.737200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.737210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f280 len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 04:09:37.737217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.737227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac2f200 len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 04:09:37.737235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.737245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac1f180 len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 04:09:37.737254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.737263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac0f100 len:0x10000 key:0x183200 00:18:23.410 [2024-04-19 04:09:37.737271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.410 [2024-04-19 04:09:37.737281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 key:0x183600 00:18:23.411 [2024-04-19 04:09:37.737288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afdff80 len:0x10000 key:0x183600 00:18:23.411 [2024-04-19 04:09:37.737305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afcff00 len:0x10000 key:0x183600 00:18:23.411 [2024-04-19 04:09:37.737323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x183600 00:18:23.411 [2024-04-19 04:09:37.737340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x183600 00:18:23.411 [2024-04-19 04:09:37.737357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af9fd80 len:0x10000 key:0x183600 00:18:23.411 [2024-04-19 04:09:37.737375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x183600 00:18:23.411 [2024-04-19 04:09:37.737393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af7fc80 len:0x10000 key:0x183600 00:18:23.411 [2024-04-19 04:09:37.737432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x183600 00:18:23.411 [2024-04-19 04:09:37.737450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x183600 00:18:23.411 [2024-04-19 04:09:37.737469] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af4fb00 len:0x10000 key:0x183600 00:18:23.411 [2024-04-19 04:09:37.737487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x183600 00:18:23.411 [2024-04-19 04:09:37.737507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af2fa00 len:0x10000 key:0x183600 00:18:23.411 [2024-04-19 04:09:37.737524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af1f980 len:0x10000 key:0x183600 00:18:23.411 [2024-04-19 04:09:37.737542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaefe00 len:0x10000 key:0x183800 00:18:23.411 [2024-04-19 04:09:37.737560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9ea000 len:0x10000 key:0x182800 00:18:23.411 [2024-04-19 04:09:37.737577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca0b000 len:0x10000 key:0x182800 00:18:23.411 [2024-04-19 04:09:37.737596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca2c000 len:0x10000 key:0x182800 00:18:23.411 [2024-04-19 04:09:37.737614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca4d000 len:0x10000 key:0x182800 00:18:23.411 [2024-04-19 04:09:37.737632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca6e000 len:0x10000 key:0x182800 00:18:23.411 [2024-04-19 04:09:37.737649] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca8f000 len:0x10000 key:0x182800 00:18:23.411 [2024-04-19 04:09:37.737668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c690000 len:0x10000 key:0x182800 00:18:23.411 [2024-04-19 04:09:37.737686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c6b1000 len:0x10000 key:0x182800 00:18:23.411 [2024-04-19 04:09:37.737704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ecd9000 len:0x10000 key:0x182800 00:18:23.411 [2024-04-19 04:09:37.737721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ecfa000 len:0x10000 key:0x182800 00:18:23.411 [2024-04-19 04:09:37.737739] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ed1b000 len:0x10000 key:0x182800 00:18:23.411 [2024-04-19 04:09:37.737756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ed3c000 len:0x10000 key:0x182800 00:18:23.411 [2024-04-19 04:09:37.737774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011be6000 len:0x10000 key:0x182800 00:18:23.411 [2024-04-19 04:09:37.737791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011bc5000 len:0x10000 key:0x182800 00:18:23.411 [2024-04-19 04:09:37.737809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011ba4000 len:0x10000 key:0x182800 00:18:23.411 [2024-04-19 04:09:37.737826] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011b83000 len:0x10000 key:0x182800 00:18:23.411 [2024-04-19 04:09:37.737843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012a35000 len:0x10000 key:0x182800 00:18:23.411 [2024-04-19 04:09:37.737862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012a14000 len:0x10000 key:0x182800 00:18:23.411 [2024-04-19 04:09:37.737880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000129f3000 len:0x10000 key:0x182800 00:18:23.411 [2024-04-19 04:09:37.737897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.737907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000129d2000 len:0x10000 key:0x182800 00:18:23.411 [2024-04-19 04:09:37.737914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:32540 cdw0:18d1c8a0 sqhd:508c p:0 m:0 dnr:0 00:18:23.411 [2024-04-19 04:09:37.740052] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806c00 was disconnected and freed. reset controller. 00:18:23.411 [2024-04-19 04:09:37.740071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1f0000 len:0x10000 key:0x183b00 00:18:23.411 [2024-04-19 04:09:37.740080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1dff80 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1cff00 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1bfe80 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1afe00 len:0x10000 
key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b19fd80 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b18fd00 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b17fc80 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b16fc00 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b15fb80 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 
04:09:37.740269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b14fb00 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b13fa80 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b12fa00 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b11f980 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b10f900 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740358] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ff880 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ef800 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0df780 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0cf700 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bf680 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0af600 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09f580 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08f500 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07f480 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06f400 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 
sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f380 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04f300 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b03f280 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f200 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f180 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 
04:09:37.740653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f100 len:0x10000 key:0x183b00 00:18:23.412 [2024-04-19 04:09:37.740661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x182f00 00:18:23.412 [2024-04-19 04:09:37.740678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff80 len:0x10000 key:0x182f00 00:18:23.412 [2024-04-19 04:09:37.740696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3cff00 len:0x10000 key:0x182f00 00:18:23.412 [2024-04-19 04:09:37.740713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.412 [2024-04-19 04:09:37.740723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x182f00 00:18:23.413 [2024-04-19 04:09:37.740731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.740742] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afe00 len:0x10000 key:0x182f00 00:18:23.413 [2024-04-19 04:09:37.740750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.740759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b39fd80 len:0x10000 key:0x182f00 00:18:23.413 [2024-04-19 04:09:37.740767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.740777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x182f00 00:18:23.413 [2024-04-19 04:09:37.740784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.740794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b37fc80 len:0x10000 key:0x182f00 00:18:23.413 [2024-04-19 04:09:37.740802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.740812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b36fc00 len:0x10000 key:0x182f00 00:18:23.413 [2024-04-19 04:09:37.740819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.740829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x182f00 00:18:23.413 [2024-04-19 04:09:37.740839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.740849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b34fb00 len:0x10000 key:0x182f00 00:18:23.413 [2024-04-19 04:09:37.740857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.740867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b33fa80 len:0x10000 key:0x182f00 00:18:23.413 [2024-04-19 04:09:37.740875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.740885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32fa00 len:0x10000 key:0x182f00 00:18:23.413 [2024-04-19 04:09:37.740893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.740903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b31f980 len:0x10000 key:0x182f00 00:18:23.413 [2024-04-19 04:09:37.740910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.740920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x182f00 00:18:23.413 [2024-04-19 04:09:37.740928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.740938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ff880 len:0x10000 key:0x182f00 00:18:23.413 [2024-04-19 04:09:37.740946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.740956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ef800 len:0x10000 key:0x182f00 00:18:23.413 [2024-04-19 04:09:37.740963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.740973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df780 len:0x10000 key:0x182f00 00:18:23.413 [2024-04-19 04:09:37.740981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.740991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x182f00 00:18:23.413 [2024-04-19 04:09:37.740999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.741008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf680 len:0x10000 
key:0x182f00 00:18:23.413 [2024-04-19 04:09:37.741016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.741027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2af600 len:0x10000 key:0x182f00 00:18:23.413 [2024-04-19 04:09:37.741035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.741045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x182f00 00:18:23.413 [2024-04-19 04:09:37.741054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.741064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 len:0x10000 key:0x182f00 00:18:23.413 [2024-04-19 04:09:37.741072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.741081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27f480 len:0x10000 key:0x182f00 00:18:23.413 [2024-04-19 04:09:37.741089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.741099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x182f00 00:18:23.413 [2024-04-19 
04:09:37.741107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.741117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25f380 len:0x10000 key:0x182f00 00:18:23.413 [2024-04-19 04:09:37.741124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.741134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x182f00 00:18:23.413 [2024-04-19 04:09:37.741142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.741152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x182f00 00:18:23.413 [2024-04-19 04:09:37.741160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.741170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b22f200 len:0x10000 key:0x182f00 00:18:23.413 [2024-04-19 04:09:37.741178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.741188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x182f00 00:18:23.413 [2024-04-19 04:09:37.741196] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.741205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20f100 len:0x10000 key:0x182f00 00:18:23.413 [2024-04-19 04:09:37.741213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.741223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5f0000 len:0x10000 key:0x183e00 00:18:23.413 [2024-04-19 04:09:37.741231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.413 [2024-04-19 04:09:37.741240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae0f700 len:0x10000 key:0x183600 00:18:23.413 [2024-04-19 04:09:37.741249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:411f440 sqhd:a8f0 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.743547] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8069c0 was disconnected and freed. reset controller. 
00:18:23.414 [2024-04-19 04:09:37.743567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010470000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.743575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.743588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010491000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.743597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.743607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104b2000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.743615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.743626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104d3000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.743634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.743644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104f4000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.743652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.743662] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010515000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.743669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.743680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010536000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.743688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.743698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010557000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.743706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.743715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010578000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.743726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.743736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010599000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.743743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.743753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:17664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105ba000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.743764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.743774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105db000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.743782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.743792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105fc000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.743800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.743809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001061d000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.743817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.743827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001063e000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.743835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.743844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18304 len:128 SGL KEYED DATA 
BLOCK ADDRESS 0x20001065f000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.743852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.743861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012c45000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.743869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.743880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012c24000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.743887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.743897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012c03000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.743905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.743914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012be2000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.743922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.743932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012bc1000 len:0x10000 key:0x182800 
00:18:23.414 [2024-04-19 04:09:37.743939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.743949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012ba0000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.743957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.743970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f9f000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.743978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.743988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f7e000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.743995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.744005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f5d000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.744012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.744023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f3c000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.744031] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.744040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f1b000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.744048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.744058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012efa000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.744066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.744075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012ed9000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.744083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.744093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012eb8000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.744101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.744110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e97000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.744119] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.744128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e76000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.744136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.744146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000108d2000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.744153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.414 [2024-04-19 04:09:37.744165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000108f3000 len:0x10000 key:0x182800 00:18:23.414 [2024-04-19 04:09:37.744173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 [2024-04-19 04:09:37.744183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010914000 len:0x10000 key:0x182800 00:18:23.415 [2024-04-19 04:09:37.744191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 [2024-04-19 04:09:37.744200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010935000 len:0x10000 key:0x182800 00:18:23.415 [2024-04-19 04:09:37.744208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 [2024-04-19 04:09:37.744218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010956000 len:0x10000 key:0x182800 00:18:23.415 [2024-04-19 04:09:37.744226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 [2024-04-19 04:09:37.744235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010977000 len:0x10000 key:0x182800 00:18:23.415 [2024-04-19 04:09:37.744243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 [2024-04-19 04:09:37.744253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010998000 len:0x10000 key:0x182800 00:18:23.415 [2024-04-19 04:09:37.744261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 [2024-04-19 04:09:37.744271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000109b9000 len:0x10000 key:0x182800 00:18:23.415 [2024-04-19 04:09:37.744278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 [2024-04-19 04:09:37.744288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000109da000 len:0x10000 key:0x182800 00:18:23.415 [2024-04-19 04:09:37.744296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 
[2024-04-19 04:09:37.744306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000109fb000 len:0x10000 key:0x182800 00:18:23.415 [2024-04-19 04:09:37.744314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 [2024-04-19 04:09:37.744324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010a1c000 len:0x10000 key:0x182800 00:18:23.415 [2024-04-19 04:09:37.744332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 [2024-04-19 04:09:37.744341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012df2000 len:0x10000 key:0x182800 00:18:23.415 [2024-04-19 04:09:37.744349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 [2024-04-19 04:09:37.744358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012dd1000 len:0x10000 key:0x182800 00:18:23.415 [2024-04-19 04:09:37.744368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 [2024-04-19 04:09:37.744378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012db0000 len:0x10000 key:0x182800 00:18:23.415 [2024-04-19 04:09:37.744385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 [2024-04-19 04:09:37.744395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000131af000 len:0x10000 key:0x182800 00:18:23.415 [2024-04-19 04:09:37.744410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 [2024-04-19 04:09:37.744424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001318e000 len:0x10000 key:0x182800 00:18:23.415 [2024-04-19 04:09:37.744432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 [2024-04-19 04:09:37.744442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e97f000 len:0x10000 key:0x182800 00:18:23.415 [2024-04-19 04:09:37.744450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 [2024-04-19 04:09:37.744460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e95e000 len:0x10000 key:0x182800 00:18:23.415 [2024-04-19 04:09:37.744468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 [2024-04-19 04:09:37.744478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011721000 len:0x10000 key:0x182800 00:18:23.415 [2024-04-19 04:09:37.744486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 [2024-04-19 04:09:37.744496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:46 nsid:1 lba:22912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011700000 len:0x10000 key:0x182800 00:18:23.415 [2024-04-19 04:09:37.744503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 [2024-04-19 04:09:37.744513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000133bf000 len:0x10000 key:0x182800 00:18:23.415 [2024-04-19 04:09:37.744521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 [2024-04-19 04:09:37.744531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001339e000 len:0x10000 key:0x182800 00:18:23.415 [2024-04-19 04:09:37.744539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 [2024-04-19 04:09:37.751194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001337d000 len:0x10000 key:0x182800 00:18:23.415 [2024-04-19 04:09:37.751221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 [2024-04-19 04:09:37.751235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001335c000 len:0x10000 key:0x182800 00:18:23.415 [2024-04-19 04:09:37.751244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 [2024-04-19 04:09:37.751259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23552 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20001333b000 len:0x10000 key:0x182800 00:18:23.415 [2024-04-19 04:09:37.751267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 [2024-04-19 04:09:37.751277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001331a000 len:0x10000 key:0x182800 00:18:23.415 [2024-04-19 04:09:37.751285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 [2024-04-19 04:09:37.751295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000132f9000 len:0x10000 key:0x182800 00:18:23.415 [2024-04-19 04:09:37.751303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 [2024-04-19 04:09:37.751314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000132d8000 len:0x10000 key:0x182800 00:18:23.415 [2024-04-19 04:09:37.751321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 [2024-04-19 04:09:37.751331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000132b7000 len:0x10000 key:0x182800 00:18:23.415 [2024-04-19 04:09:37.751339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0 00:18:23.415 [2024-04-19 04:09:37.751349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013296000 len:0x10000 key:0x182800 
00:18:23.415 [2024-04-19 04:09:37.751356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0
00:18:23.415 [2024-04-19 04:09:37.751367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013275000 len:0x10000 key:0x182800
00:18:23.415 [2024-04-19 04:09:37.751375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0
00:18:23.415 [2024-04-19 04:09:37.751384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013254000 len:0x10000 key:0x182800
00:18:23.415 [2024-04-19 04:09:37.751392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:414f440 sqhd:d290 p:0 m:0 dnr:0
00:18:23.415 [2024-04-19 04:09:37.754799] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806780 was disconnected and freed. reset controller.
00:18:23.415 [2024-04-19 04:09:37.754887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:23.415 [2024-04-19 04:09:37.754902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:23.415 [2024-04-19 04:09:37.754912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:23.415 [2024-04-19 04:09:37.754920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:23.415 [2024-04-19 04:09:37.754928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:23.415 [2024-04-19 04:09:37.754936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:23.415 [2024-04-19 04:09:37.754947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:18:23.415 [2024-04-19 04:09:37.754955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:23.415 [2024-04-19 04:09:37.757187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:18:23.415 [2024-04-19 04:09:37.757224] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:18:23.415 [2024-04-19 04:09:37.757244] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:23.415 [2024-04-19 04:09:37.757282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:23.415 [2024-04-19 04:09:37.757305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0
00:18:23.416 [2024-04-19 04:09:37.757327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:23.416 [2024-04-19 04:09:37.757348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0
00:18:23.416 [2024-04-19 04:09:37.757370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:23.416 [2024-04-19 04:09:37.757391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0
00:18:23.416 [2024-04-19 04:09:37.757427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:18:23.416 [2024-04-19 04:09:37.757448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0
00:18:23.416 [2024-04-19 04:09:37.759821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:18:23.416 [2024-04-19 04:09:37.759853] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:18:23.416 [2024-04-19 04:09:37.759873] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:23.416 [2024-04-19 04:09:37.759907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:23.416 [2024-04-19 04:09:37.759930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0
00:18:23.416 [2024-04-19 04:09:37.759953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:23.416 [2024-04-19 04:09:37.759974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0
00:18:23.416 [2024-04-19 04:09:37.759996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:23.416 [2024-04-19 04:09:37.760017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0
00:18:23.416 [2024-04-19 04:09:37.760039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:18:23.416 [2024-04-19 04:09:37.760059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0
00:18:23.416 [2024-04-19 04:09:37.762088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:18:23.416 [2024-04-19 04:09:37.762120] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:18:23.416 [2024-04-19 04:09:37.762147] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:23.416 [2024-04-19 04:09:37.762185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.416 [2024-04-19 04:09:37.762208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.416 [2024-04-19 04:09:37.762231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.416 [2024-04-19 04:09:37.762251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.416 [2024-04-19 04:09:37.762274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.416 [2024-04-19 04:09:37.762295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.416 [2024-04-19 04:09:37.762317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.416 [2024-04-19 04:09:37.762338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.416 [2024-04-19 04:09:37.764516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:23.416 [2024-04-19 04:09:37.764529] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:18:23.416 [2024-04-19 04:09:37.764537] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:18:23.416 [2024-04-19 04:09:37.764550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.416 [2024-04-19 04:09:37.764560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.416 [2024-04-19 04:09:37.764569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.416 [2024-04-19 04:09:37.764577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.416 [2024-04-19 04:09:37.764585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.416 [2024-04-19 04:09:37.764593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.416 [2024-04-19 04:09:37.764602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.416 [2024-04-19 04:09:37.764610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.416 [2024-04-19 04:09:37.766733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:23.416 [2024-04-19 04:09:37.766746] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:18:23.416 [2024-04-19 04:09:37.766754] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:18:23.416 [2024-04-19 04:09:37.766767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.416 [2024-04-19 04:09:37.766776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.416 [2024-04-19 04:09:37.766785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.416 [2024-04-19 04:09:37.766793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.416 [2024-04-19 04:09:37.766804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.416 [2024-04-19 04:09:37.766812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.416 [2024-04-19 04:09:37.766821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.416 [2024-04-19 04:09:37.766828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.416 [2024-04-19 04:09:37.768723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:23.416 [2024-04-19 04:09:37.768736] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:23.416 [2024-04-19 04:09:37.768743] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:18:23.416 [2024-04-19 04:09:37.768756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.416 [2024-04-19 04:09:37.768764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.416 [2024-04-19 04:09:37.768773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.416 [2024-04-19 04:09:37.768781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.416 [2024-04-19 04:09:37.768789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.416 [2024-04-19 04:09:37.768797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.416 [2024-04-19 04:09:37.768805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.416 [2024-04-19 04:09:37.768813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.416 [2024-04-19 04:09:37.770566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:23.416 [2024-04-19 04:09:37.770578] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:18:23.416 [2024-04-19 04:09:37.770585] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:18:23.416 [2024-04-19 04:09:37.770600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.416 [2024-04-19 04:09:37.770609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.416 [2024-04-19 04:09:37.770618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.416 [2024-04-19 04:09:37.770625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.416 [2024-04-19 04:09:37.770634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.416 [2024-04-19 04:09:37.770642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.416 [2024-04-19 04:09:37.770650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.416 [2024-04-19 04:09:37.770658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.416 [2024-04-19 04:09:37.772453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:23.416 [2024-04-19 04:09:37.772465] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:18:23.416 [2024-04-19 04:09:37.772472] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:18:23.416 [2024-04-19 04:09:37.772486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.416 [2024-04-19 04:09:37.772495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.416 [2024-04-19 04:09:37.772503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.416 [2024-04-19 04:09:37.772512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.416 [2024-04-19 04:09:37.772520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.416 [2024-04-19 04:09:37.772528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.416 [2024-04-19 04:09:37.772536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.417 [2024-04-19 04:09:37.772544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.417 [2024-04-19 04:09:37.774233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:23.417 [2024-04-19 04:09:37.774246] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:18:23.417 [2024-04-19 04:09:37.774253] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:18:23.417 [2024-04-19 04:09:37.774267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.417 [2024-04-19 04:09:37.774276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.417 [2024-04-19 04:09:37.774284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.417 [2024-04-19 04:09:37.774293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.417 [2024-04-19 04:09:37.774301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.417 [2024-04-19 04:09:37.774309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.417 [2024-04-19 04:09:37.774317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:23.417 [2024-04-19 04:09:37.774325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46609 cdw0:0 sqhd:e400 p:0 m:0 dnr:0 00:18:23.417 [2024-04-19 04:09:37.792304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:23.417 [2024-04-19 04:09:37.792347] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:18:23.417 [2024-04-19 04:09:37.792368] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:18:23.417 [2024-04-19 04:09:37.800085] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:23.417 [2024-04-19 04:09:37.800115] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:18:23.417 [2024-04-19 04:09:37.800123] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:18:23.417 [2024-04-19 04:09:37.800173] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:23.417 [2024-04-19 04:09:37.800184] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:23.417 [2024-04-19 04:09:37.800194] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:23.417 [2024-04-19 04:09:37.800205] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:23.417 [2024-04-19 04:09:37.800213] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:23.417 [2024-04-19 04:09:37.800221] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:23.417 [2024-04-19 04:09:37.800229] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:18:23.417 [2024-04-19 04:09:37.800304] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:18:23.417 [2024-04-19 04:09:37.800312] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:18:23.417 [2024-04-19 04:09:37.800319] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:18:23.417 [2024-04-19 04:09:37.800329] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:18:23.417 [2024-04-19 04:09:37.802367] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:18:23.417 task offset: 40960 on job bdev=Nvme7n1 fails 00:18:23.417 00:18:23.417 Latency(us) 00:18:23.417 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.417 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:23.417 Job: Nvme1n1 ended in about 1.83 seconds with error 00:18:23.417 Verification LBA range: start 0x0 length 0x400 00:18:23.417 Nvme1n1 : 1.83 150.65 9.42 35.06 0.00 342348.31 7087.60 1062557.01 00:18:23.417 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:23.417 Job: Nvme2n1 ended in about 1.83 seconds with error 00:18:23.417 Verification LBA range: start 0x0 length 0x400 00:18:23.417 Nvme2n1 : 1.83 148.94 9.31 35.04 0.00 342668.90 8543.95 1068770.80 00:18:23.417 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:23.417 Job: Nvme3n1 ended in about 1.83 seconds with error 00:18:23.417 Verification LBA range: start 0x0 length 0x400 00:18:23.417 Nvme3n1 : 1.83 156.54 9.78 35.03 0.00 326495.77 15146.10 1068770.80 00:18:23.417 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:23.417 Job: Nvme4n1 ended in about 1.83 seconds with error 00:18:23.417 Verification LBA range: start 0x0 length 0x400 00:18:23.417 
Nvme4n1 : 1.83 153.19 9.57 35.02 0.00 329690.22 4296.25 1068770.80 00:18:23.417 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:23.417 Job: Nvme5n1 ended in about 1.83 seconds with error 00:18:23.417 Verification LBA range: start 0x0 length 0x400 00:18:23.417 Nvme5n1 : 1.83 143.29 8.96 35.00 0.00 345336.90 22719.15 1068770.80 00:18:23.417 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:23.417 Job: Nvme6n1 ended in about 1.83 seconds with error 00:18:23.417 Verification LBA range: start 0x0 length 0x400 00:18:23.417 Nvme6n1 : 1.83 157.45 9.84 34.99 0.00 317543.43 26408.58 1068770.80 00:18:23.417 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:23.417 Job: Nvme7n1 ended in about 1.83 seconds with error 00:18:23.417 Verification LBA range: start 0x0 length 0x400 00:18:23.417 Nvme7n1 : 1.83 149.74 9.36 34.98 0.00 323115.89 32039.82 1062557.01 00:18:23.417 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:23.417 Job: Nvme8n1 ended in about 1.83 seconds with error 00:18:23.417 Verification LBA range: start 0x0 length 0x400 00:18:23.417 Nvme8n1 : 1.83 139.85 8.74 34.96 0.00 343991.18 36505.98 1112267.28 00:18:23.417 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:23.417 Job: Nvme9n1 ended in about 1.83 seconds with error 00:18:23.417 Verification LBA range: start 0x0 length 0x400 00:18:23.417 Nvme9n1 : 1.83 139.79 8.74 34.95 0.00 341123.68 44661.57 1106053.50 00:18:23.417 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:23.417 Job: Nvme10n1 ended in about 1.83 seconds with error 00:18:23.417 Verification LBA range: start 0x0 length 0x400 00:18:23.417 Nvme10n1 : 1.83 69.87 4.37 34.93 0.00 562990.65 45244.11 1087412.15 00:18:23.417 =================================================================================================================== 00:18:23.417 Total : 1409.30 88.08 349.96 
0.00 348081.76 4296.25 1112267.28 00:18:23.417 [2024-04-19 04:09:37.822051] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:23.417 [2024-04-19 04:09:37.822071] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:18:23.417 [2024-04-19 04:09:37.822081] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:18:23.417 [2024-04-19 04:09:37.831023] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:23.417 [2024-04-19 04:09:37.831073] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:23.417 [2024-04-19 04:09:37.831092] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:18:23.417 [2024-04-19 04:09:37.831211] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:23.417 [2024-04-19 04:09:37.831236] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:23.417 [2024-04-19 04:09:37.831252] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5380 00:18:23.417 [2024-04-19 04:09:37.831348] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:23.417 [2024-04-19 04:09:37.831371] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:23.417 [2024-04-19 04:09:37.831387] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ba540 00:18:23.417 [2024-04-19 04:09:37.834555] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received 
RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:23.417 [2024-04-19 04:09:37.834596] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:23.417 [2024-04-19 04:09:37.834614] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c00 00:18:23.417 [2024-04-19 04:09:37.834716] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:23.417 [2024-04-19 04:09:37.834742] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:23.418 [2024-04-19 04:09:37.834758] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c60c0 00:18:23.418 [2024-04-19 04:09:37.834868] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:23.418 [2024-04-19 04:09:37.834878] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:23.418 [2024-04-19 04:09:37.834884] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bd440 00:18:23.418 [2024-04-19 04:09:37.834949] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:23.418 [2024-04-19 04:09:37.834958] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:23.418 [2024-04-19 04:09:37.834964] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192b54c0 00:18:23.418 [2024-04-19 04:09:37.835640] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:23.418 [2024-04-19 04:09:37.835672] 
nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:23.418 [2024-04-19 04:09:37.835688] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928f280 00:18:23.418 [2024-04-19 04:09:37.835785] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:23.418 [2024-04-19 04:09:37.835809] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:23.418 [2024-04-19 04:09:37.835825] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019298cc0 00:18:23.418 [2024-04-19 04:09:37.835922] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:18:23.418 [2024-04-19 04:09:37.835947] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:18:23.418 [2024-04-19 04:09:37.835963] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929c140 00:18:23.676 04:09:38 -- target/shutdown.sh@142 -- # kill -9 340306 00:18:23.676 04:09:38 -- target/shutdown.sh@144 -- # stoptarget 00:18:23.676 04:09:38 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:18:23.676 04:09:38 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:18:23.676 04:09:38 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:23.676 04:09:38 -- target/shutdown.sh@45 -- # nvmftestfini 00:18:23.676 04:09:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:23.676 04:09:38 -- nvmf/common.sh@117 -- # sync 00:18:23.676 04:09:38 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:23.676 04:09:38 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:23.676 04:09:38 
-- nvmf/common.sh@120 -- # set +e 00:18:23.676 04:09:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:23.676 04:09:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:23.676 rmmod nvme_rdma 00:18:23.676 rmmod nvme_fabrics 00:18:23.936 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 121: 340306 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:18:23.936 04:09:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:23.936 04:09:38 -- nvmf/common.sh@124 -- # set -e 00:18:23.936 04:09:38 -- nvmf/common.sh@125 -- # return 0 00:18:23.936 04:09:38 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:18:23.936 04:09:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:23.936 04:09:38 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:18:23.936 00:18:23.936 real 0m5.020s 00:18:23.936 user 0m17.111s 00:18:23.936 sys 0m1.085s 00:18:23.936 04:09:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:23.936 04:09:38 -- common/autotest_common.sh@10 -- # set +x 00:18:23.936 ************************************ 00:18:23.936 END TEST nvmf_shutdown_tc3 00:18:23.936 ************************************ 00:18:23.936 04:09:38 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:18:23.936 00:18:23.936 real 0m23.150s 00:18:23.936 user 1m8.852s 00:18:23.936 sys 0m7.585s 00:18:23.936 04:09:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:23.936 04:09:38 -- common/autotest_common.sh@10 -- # set +x 00:18:23.936 ************************************ 00:18:23.936 END TEST nvmf_shutdown 00:18:23.936 ************************************ 00:18:23.936 04:09:38 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:18:23.936 04:09:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:23.936 04:09:38 -- common/autotest_common.sh@10 -- # set +x 00:18:23.936 04:09:38 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:18:23.936 04:09:38 -- 
common/autotest_common.sh@710 -- # xtrace_disable 00:18:23.936 04:09:38 -- common/autotest_common.sh@10 -- # set +x 00:18:23.936 04:09:38 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:18:23.936 04:09:38 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:18:23.936 04:09:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:23.936 04:09:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:23.936 04:09:38 -- common/autotest_common.sh@10 -- # set +x 00:18:23.936 ************************************ 00:18:23.936 START TEST nvmf_multicontroller 00:18:23.936 ************************************ 00:18:24.195 04:09:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:18:24.195 * Looking for test storage... 00:18:24.195 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:24.195 04:09:38 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:24.195 04:09:38 -- nvmf/common.sh@7 -- # uname -s 00:18:24.195 04:09:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:24.195 04:09:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:24.195 04:09:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.195 04:09:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.195 04:09:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.195 04:09:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.195 04:09:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.195 04:09:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.195 04:09:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.195 04:09:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.195 04:09:38 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:18:24.195 04:09:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:18:24.195 04:09:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.195 04:09:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.195 04:09:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:24.195 04:09:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:24.195 04:09:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:24.195 04:09:38 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.195 04:09:38 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.195 04:09:38 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.195 04:09:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.195 04:09:38 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.195 04:09:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.195 04:09:38 -- paths/export.sh@5 -- # export PATH 00:18:24.195 04:09:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.195 04:09:38 -- nvmf/common.sh@47 -- # : 0 00:18:24.195 04:09:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:24.195 04:09:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:24.195 04:09:38 -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:18:24.195 04:09:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.195 04:09:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.195 04:09:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:24.195 04:09:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:24.195 04:09:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:24.195 04:09:38 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:24.195 04:09:38 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:24.195 04:09:38 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:18:24.195 04:09:38 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:18:24.195 04:09:38 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:24.195 04:09:38 -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:18:24.195 04:09:38 -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:18:24.195 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
00:18:24.195 04:09:38 -- host/multicontroller.sh@20 -- # exit 0 00:18:24.195 00:18:24.195 real 0m0.106s 00:18:24.195 user 0m0.045s 00:18:24.195 sys 0m0.068s 00:18:24.195 04:09:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:24.195 04:09:38 -- common/autotest_common.sh@10 -- # set +x 00:18:24.195 ************************************ 00:18:24.195 END TEST nvmf_multicontroller 00:18:24.195 ************************************ 00:18:24.195 04:09:38 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:18:24.195 04:09:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:24.196 04:09:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:24.196 04:09:38 -- common/autotest_common.sh@10 -- # set +x 00:18:24.454 ************************************ 00:18:24.454 START TEST nvmf_aer 00:18:24.454 ************************************ 00:18:24.454 04:09:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:18:24.454 * Looking for test storage... 
00:18:24.454 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:24.454 04:09:38 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:24.454 04:09:38 -- nvmf/common.sh@7 -- # uname -s 00:18:24.454 04:09:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:24.454 04:09:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:24.454 04:09:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.454 04:09:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.455 04:09:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.455 04:09:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.455 04:09:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.455 04:09:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.455 04:09:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.455 04:09:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.455 04:09:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:18:24.455 04:09:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:18:24.455 04:09:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.455 04:09:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.455 04:09:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:24.455 04:09:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:24.455 04:09:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:24.455 04:09:38 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.455 04:09:38 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.455 04:09:38 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.455 04:09:38 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.455 04:09:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.455 04:09:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.455 04:09:38 -- paths/export.sh@5 -- # export PATH 00:18:24.455 04:09:38 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.455 04:09:38 -- nvmf/common.sh@47 -- # : 0 00:18:24.455 04:09:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:24.455 04:09:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:24.455 04:09:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:24.455 04:09:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.455 04:09:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.455 04:09:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:24.455 04:09:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:24.455 04:09:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:24.455 04:09:38 -- host/aer.sh@11 -- # nvmftestinit 00:18:24.455 04:09:38 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:18:24.455 04:09:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.455 04:09:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:24.455 04:09:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:24.455 04:09:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:24.455 04:09:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.455 04:09:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:24.455 04:09:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.455 04:09:38 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:24.455 04:09:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:24.455 04:09:38 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:18:24.455 04:09:38 -- common/autotest_common.sh@10 -- # set +x 00:18:29.730 04:09:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:29.730 04:09:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:29.730 04:09:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:29.730 04:09:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:29.730 04:09:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:29.730 04:09:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:29.730 04:09:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:29.730 04:09:44 -- nvmf/common.sh@295 -- # net_devs=() 00:18:29.730 04:09:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:29.730 04:09:44 -- nvmf/common.sh@296 -- # e810=() 00:18:29.730 04:09:44 -- nvmf/common.sh@296 -- # local -ga e810 00:18:29.730 04:09:44 -- nvmf/common.sh@297 -- # x722=() 00:18:29.730 04:09:44 -- nvmf/common.sh@297 -- # local -ga x722 00:18:29.730 04:09:44 -- nvmf/common.sh@298 -- # mlx=() 00:18:29.730 04:09:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:29.730 04:09:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:29.730 04:09:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:29.730 04:09:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:29.730 04:09:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:29.730 04:09:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:29.730 04:09:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:29.730 04:09:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:29.730 04:09:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:29.730 04:09:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:29.730 04:09:44 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:29.730 04:09:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:29.730 04:09:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:29.730 04:09:44 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:29.730 04:09:44 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:29.730 04:09:44 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:29.730 04:09:44 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:29.730 04:09:44 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:29.730 04:09:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:29.730 04:09:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:29.730 04:09:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:29.730 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:29.730 04:09:44 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:29.730 04:09:44 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:29.730 04:09:44 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:29.730 04:09:44 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:29.730 04:09:44 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:29.730 04:09:44 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:29.730 04:09:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:29.730 04:09:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:29.730 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:29.730 04:09:44 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:29.730 04:09:44 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:29.730 04:09:44 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:29.730 04:09:44 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:29.730 04:09:44 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:29.730 04:09:44 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:29.730 
04:09:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:29.730 04:09:44 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:29.730 04:09:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:29.730 04:09:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:29.730 04:09:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:29.730 04:09:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:29.730 04:09:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:29.730 Found net devices under 0000:18:00.0: mlx_0_0 00:18:29.730 04:09:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:29.730 04:09:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:29.730 04:09:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:29.730 04:09:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:29.730 04:09:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:29.730 04:09:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:29.730 Found net devices under 0000:18:00.1: mlx_0_1 00:18:29.730 04:09:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:29.730 04:09:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:29.730 04:09:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:29.730 04:09:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:29.730 04:09:44 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:18:29.730 04:09:44 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:18:29.730 04:09:44 -- nvmf/common.sh@409 -- # rdma_device_init 00:18:29.730 04:09:44 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:18:29.730 04:09:44 -- nvmf/common.sh@58 -- # uname 00:18:29.730 04:09:44 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:29.730 04:09:44 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:29.730 04:09:44 -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:29.730 04:09:44 -- 
nvmf/common.sh@64 -- # modprobe ib_umad 00:18:29.730 04:09:44 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:29.730 04:09:44 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:29.730 04:09:44 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:29.730 04:09:44 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:29.730 04:09:44 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:18:29.730 04:09:44 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:29.730 04:09:44 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:29.730 04:09:44 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:29.730 04:09:44 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:29.730 04:09:44 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:29.730 04:09:44 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:29.730 04:09:44 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:29.730 04:09:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:29.730 04:09:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:29.730 04:09:44 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:29.730 04:09:44 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:29.730 04:09:44 -- nvmf/common.sh@105 -- # continue 2 00:18:29.731 04:09:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:29.731 04:09:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:29.731 04:09:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:29.731 04:09:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:29.731 04:09:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:29.731 04:09:44 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:29.731 04:09:44 -- nvmf/common.sh@105 -- # continue 2 00:18:29.731 04:09:44 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:29.731 04:09:44 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 
00:18:29.731 04:09:44 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:29.731 04:09:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:29.731 04:09:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:29.731 04:09:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:29.731 04:09:44 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:29.731 04:09:44 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:29.731 04:09:44 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:29.731 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:29.731 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:18:29.731 altname enp24s0f0np0 00:18:29.731 altname ens785f0np0 00:18:29.731 inet 192.168.100.8/24 scope global mlx_0_0 00:18:29.731 valid_lft forever preferred_lft forever 00:18:29.731 04:09:44 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:29.731 04:09:44 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:29.731 04:09:44 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:29.731 04:09:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:29.731 04:09:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:29.731 04:09:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:29.731 04:09:44 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:29.731 04:09:44 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:29.731 04:09:44 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:29.731 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:29.731 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:18:29.731 altname enp24s0f1np1 00:18:29.731 altname ens785f1np1 00:18:29.731 inet 192.168.100.9/24 scope global mlx_0_1 00:18:29.731 valid_lft forever preferred_lft forever 00:18:29.731 04:09:44 -- nvmf/common.sh@411 -- # return 0 00:18:29.731 04:09:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:29.731 04:09:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:29.731 04:09:44 -- nvmf/common.sh@444 -- # [[ 
rdma == \r\d\m\a ]] 00:18:29.731 04:09:44 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:18:29.731 04:09:44 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:29.731 04:09:44 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:29.731 04:09:44 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:29.731 04:09:44 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:29.731 04:09:44 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:29.731 04:09:44 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:29.731 04:09:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:29.731 04:09:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:29.731 04:09:44 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:29.731 04:09:44 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:29.731 04:09:44 -- nvmf/common.sh@105 -- # continue 2 00:18:29.731 04:09:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:29.731 04:09:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:29.731 04:09:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:29.731 04:09:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:29.731 04:09:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:29.731 04:09:44 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:29.731 04:09:44 -- nvmf/common.sh@105 -- # continue 2 00:18:29.731 04:09:44 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:29.731 04:09:44 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:29.731 04:09:44 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:29.731 04:09:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:29.731 04:09:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:29.731 04:09:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:29.731 04:09:44 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 
00:18:29.731 04:09:44 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:29.731 04:09:44 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:29.731 04:09:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:29.731 04:09:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:29.731 04:09:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:29.731 04:09:44 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:18:29.731 192.168.100.9' 00:18:29.731 04:09:44 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:29.731 192.168.100.9' 00:18:29.731 04:09:44 -- nvmf/common.sh@446 -- # head -n 1 00:18:29.731 04:09:44 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:29.731 04:09:44 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:18:29.731 192.168.100.9' 00:18:29.731 04:09:44 -- nvmf/common.sh@447 -- # tail -n +2 00:18:29.731 04:09:44 -- nvmf/common.sh@447 -- # head -n 1 00:18:29.731 04:09:44 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:29.731 04:09:44 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:18:29.731 04:09:44 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:29.731 04:09:44 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:18:29.731 04:09:44 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:18:29.731 04:09:44 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:18:29.731 04:09:44 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:18:29.731 04:09:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:29.731 04:09:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:29.731 04:09:44 -- common/autotest_common.sh@10 -- # set +x 00:18:29.731 04:09:44 -- nvmf/common.sh@470 -- # nvmfpid=344381 00:18:29.731 04:09:44 -- nvmf/common.sh@471 -- # waitforlisten 344381 00:18:29.731 04:09:44 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:29.731 04:09:44 -- common/autotest_common.sh@817 -- # '[' -z 344381 
']' 00:18:29.731 04:09:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.731 04:09:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:29.731 04:09:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.731 04:09:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:29.731 04:09:44 -- common/autotest_common.sh@10 -- # set +x 00:18:29.989 [2024-04-19 04:09:44.260941] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:18:29.989 [2024-04-19 04:09:44.260987] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:29.989 EAL: No free 2048 kB hugepages reported on node 1 00:18:29.989 [2024-04-19 04:09:44.313233] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:29.989 [2024-04-19 04:09:44.390480] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:29.989 [2024-04-19 04:09:44.390514] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:29.989 [2024-04-19 04:09:44.390520] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:29.989 [2024-04-19 04:09:44.390526] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:29.989 [2024-04-19 04:09:44.390530] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:29.989 [2024-04-19 04:09:44.390567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:29.989 [2024-04-19 04:09:44.390647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:29.989 [2024-04-19 04:09:44.390729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:29.989 [2024-04-19 04:09:44.390730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.554 04:09:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:30.554 04:09:45 -- common/autotest_common.sh@850 -- # return 0 00:18:30.554 04:09:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:30.554 04:09:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:30.554 04:09:45 -- common/autotest_common.sh@10 -- # set +x 00:18:30.812 04:09:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.812 04:09:45 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:30.812 04:09:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:30.812 04:09:45 -- common/autotest_common.sh@10 -- # set +x 00:18:30.812 [2024-04-19 04:09:45.110304] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb9c6c0/0xba0bb0) succeed. 00:18:30.812 [2024-04-19 04:09:45.119515] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb9dcb0/0xbe2240) succeed. 
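The `allocate_nic_ips`/`get_ip_address` trace above pulls each interface's IPv4 address out of `ip -o -4 addr show` with an `awk`/`cut` pipeline. A minimal self-contained sketch of that extraction, fed a canned sample line (the interface name and address are copied from the log, so no RDMA NIC is needed to run it):

```shell
# One-line-per-address output from `ip -o -4 addr show <if>`; field 4 is
# "ADDR/PREFIX". This sample mirrors the mlx_0_0 entry logged above.
sample='8: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0'

# Same pipeline as nvmf/common.sh: take field 4, strip the /PREFIX suffix.
ip_addr=$(printf '%s\n' "$sample" | awk '{print $4}' | cut -d/ -f1)
echo "$ip_addr"   # prints 192.168.100.8
```

Against a live system the `printf` stage is replaced by the real `ip -o -4 addr show "$interface"` call, as in the traced script.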
00:18:30.812 04:09:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:30.812 04:09:45 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:18:30.812 04:09:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:30.812 04:09:45 -- common/autotest_common.sh@10 -- # set +x 00:18:30.812 Malloc0 00:18:30.812 04:09:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:30.812 04:09:45 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:18:30.812 04:09:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:30.812 04:09:45 -- common/autotest_common.sh@10 -- # set +x 00:18:30.812 04:09:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:30.812 04:09:45 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:30.812 04:09:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:30.812 04:09:45 -- common/autotest_common.sh@10 -- # set +x 00:18:30.812 04:09:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:30.812 04:09:45 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:30.812 04:09:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:30.812 04:09:45 -- common/autotest_common.sh@10 -- # set +x 00:18:30.812 [2024-04-19 04:09:45.273492] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:30.812 04:09:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:30.812 04:09:45 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:18:30.812 04:09:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:30.812 04:09:45 -- common/autotest_common.sh@10 -- # set +x 00:18:30.812 [2024-04-19 04:09:45.281137] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 
00:18:30.812 [ 00:18:30.812 { 00:18:30.812 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:30.812 "subtype": "Discovery", 00:18:30.812 "listen_addresses": [], 00:18:30.812 "allow_any_host": true, 00:18:30.812 "hosts": [] 00:18:30.812 }, 00:18:30.812 { 00:18:30.812 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.812 "subtype": "NVMe", 00:18:30.812 "listen_addresses": [ 00:18:30.812 { 00:18:30.812 "transport": "RDMA", 00:18:30.812 "trtype": "RDMA", 00:18:30.812 "adrfam": "IPv4", 00:18:30.812 "traddr": "192.168.100.8", 00:18:30.812 "trsvcid": "4420" 00:18:30.812 } 00:18:30.812 ], 00:18:30.812 "allow_any_host": true, 00:18:30.812 "hosts": [], 00:18:30.812 "serial_number": "SPDK00000000000001", 00:18:30.812 "model_number": "SPDK bdev Controller", 00:18:30.812 "max_namespaces": 2, 00:18:30.812 "min_cntlid": 1, 00:18:30.812 "max_cntlid": 65519, 00:18:30.812 "namespaces": [ 00:18:30.812 { 00:18:30.812 "nsid": 1, 00:18:30.812 "bdev_name": "Malloc0", 00:18:30.812 "name": "Malloc0", 00:18:30.812 "nguid": "E1071E4A3DA94178BCDDFD18BA12CF1C", 00:18:30.812 "uuid": "e1071e4a-3da9-4178-bcdd-fd18ba12cf1c" 00:18:30.812 } 00:18:30.812 ] 00:18:30.812 } 00:18:30.812 ] 00:18:30.812 04:09:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:30.812 04:09:45 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:30.812 04:09:45 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:18:30.812 04:09:45 -- host/aer.sh@33 -- # aerpid=344443 00:18:30.812 04:09:45 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:18:30.812 04:09:45 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:18:30.812 04:09:45 -- common/autotest_common.sh@1251 -- # local i=0 00:18:30.812 04:09:45 -- common/autotest_common.sh@1252 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:30.812 04:09:45 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:18:30.812 04:09:45 -- common/autotest_common.sh@1254 -- # i=1 00:18:30.812 04:09:45 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:18:31.071 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.071 04:09:45 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:31.071 04:09:45 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:18:31.071 04:09:45 -- common/autotest_common.sh@1254 -- # i=2 00:18:31.071 04:09:45 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:18:31.071 04:09:45 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:31.071 04:09:45 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:31.071 04:09:45 -- common/autotest_common.sh@1262 -- # return 0 00:18:31.071 04:09:45 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:18:31.071 04:09:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.071 04:09:45 -- common/autotest_common.sh@10 -- # set +x 00:18:31.071 Malloc1 00:18:31.071 04:09:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:31.071 04:09:45 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:18:31.071 04:09:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.071 04:09:45 -- common/autotest_common.sh@10 -- # set +x 00:18:31.071 04:09:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:31.071 04:09:45 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:18:31.071 04:09:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.071 04:09:45 -- common/autotest_common.sh@10 -- # set +x 00:18:31.071 [ 00:18:31.071 { 00:18:31.071 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:31.071 "subtype": "Discovery", 00:18:31.071 "listen_addresses": [], 00:18:31.071 "allow_any_host": true, 00:18:31.071 "hosts": [] 00:18:31.071 }, 
00:18:31.071 { 00:18:31.071 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.071 "subtype": "NVMe", 00:18:31.071 "listen_addresses": [ 00:18:31.071 { 00:18:31.071 "transport": "RDMA", 00:18:31.071 "trtype": "RDMA", 00:18:31.071 "adrfam": "IPv4", 00:18:31.071 "traddr": "192.168.100.8", 00:18:31.071 "trsvcid": "4420" 00:18:31.071 } 00:18:31.071 ], 00:18:31.071 "allow_any_host": true, 00:18:31.071 "hosts": [], 00:18:31.071 "serial_number": "SPDK00000000000001", 00:18:31.071 "model_number": "SPDK bdev Controller", 00:18:31.071 "max_namespaces": 2, 00:18:31.071 "min_cntlid": 1, 00:18:31.071 "max_cntlid": 65519, 00:18:31.071 "namespaces": [ 00:18:31.071 { 00:18:31.071 "nsid": 1, 00:18:31.071 "bdev_name": "Malloc0", 00:18:31.071 "name": "Malloc0", 00:18:31.071 "nguid": "E1071E4A3DA94178BCDDFD18BA12CF1C", 00:18:31.071 "uuid": "e1071e4a-3da9-4178-bcdd-fd18ba12cf1c" 00:18:31.071 }, 00:18:31.071 { 00:18:31.071 "nsid": 2, 00:18:31.071 "bdev_name": "Malloc1", 00:18:31.071 "name": "Malloc1", 00:18:31.071 "nguid": "DD0A3482E5DA427EBC54FB18953739B0", 00:18:31.071 "uuid": "dd0a3482-e5da-427e-bc54-fb18953739b0" 00:18:31.071 } 00:18:31.071 ] 00:18:31.071 } 00:18:31.071 ] 00:18:31.071 04:09:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:31.071 04:09:45 -- host/aer.sh@43 -- # wait 344443 00:18:31.071 Asynchronous Event Request test 00:18:31.071 Attaching to 192.168.100.8 00:18:31.071 Attached to 192.168.100.8 00:18:31.071 Registering asynchronous event callbacks... 00:18:31.071 Starting namespace attribute notice tests for all controllers... 00:18:31.071 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:31.071 aer_cb - Changed Namespace 00:18:31.071 Cleaning up... 
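The `waitforfile` loop traced above polls for the touch file that the aer tool creates, sleeping 0.1 s between checks with a 200-iteration cap. A minimal standalone sketch of that polling helper (the limit and sleep interval match the log; the demo file path is illustrative):

```shell
#!/usr/bin/env bash
# Poll for a file up to 200 times, sleeping 0.1 s between checks,
# mirroring the waitforfile helper seen in autotest_common.sh.
waitforfile() {
    local file=$1 i=0
    while [ ! -e "$file" ]; do
        if [ "$i" -ge 200 ]; then
            return 1            # timed out, file never appeared
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 0
}

tmpfile=$(mktemp -u)                 # a path that does not exist yet
( sleep 0.3; touch "$tmpfile" ) &    # simulate the aer tool creating the file
waitforfile "$tmpfile" && echo "file appeared"
rm -f "$tmpfile"
```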
00:18:31.071 04:09:45 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:31.071 04:09:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.071 04:09:45 -- common/autotest_common.sh@10 -- # set +x 00:18:31.329 04:09:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:31.329 04:09:45 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:18:31.329 04:09:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.329 04:09:45 -- common/autotest_common.sh@10 -- # set +x 00:18:31.329 04:09:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:31.329 04:09:45 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:31.329 04:09:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.329 04:09:45 -- common/autotest_common.sh@10 -- # set +x 00:18:31.329 04:09:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:31.329 04:09:45 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:18:31.329 04:09:45 -- host/aer.sh@51 -- # nvmftestfini 00:18:31.329 04:09:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:31.329 04:09:45 -- nvmf/common.sh@117 -- # sync 00:18:31.329 04:09:45 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:31.329 04:09:45 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:31.329 04:09:45 -- nvmf/common.sh@120 -- # set +e 00:18:31.329 04:09:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:31.329 04:09:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:31.329 rmmod nvme_rdma 00:18:31.329 rmmod nvme_fabrics 00:18:31.329 04:09:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:31.329 04:09:45 -- nvmf/common.sh@124 -- # set -e 00:18:31.329 04:09:45 -- nvmf/common.sh@125 -- # return 0 00:18:31.329 04:09:45 -- nvmf/common.sh@478 -- # '[' -n 344381 ']' 00:18:31.329 04:09:45 -- nvmf/common.sh@479 -- # killprocess 344381 00:18:31.329 04:09:45 -- common/autotest_common.sh@936 -- # '[' -z 344381 ']' 00:18:31.329 04:09:45 -- 
common/autotest_common.sh@940 -- # kill -0 344381 00:18:31.329 04:09:45 -- common/autotest_common.sh@941 -- # uname 00:18:31.329 04:09:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:31.329 04:09:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 344381 00:18:31.329 04:09:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:31.329 04:09:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:31.329 04:09:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 344381' 00:18:31.329 killing process with pid 344381 00:18:31.329 04:09:45 -- common/autotest_common.sh@955 -- # kill 344381 00:18:31.329 [2024-04-19 04:09:45.767788] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:18:31.329 04:09:45 -- common/autotest_common.sh@960 -- # wait 344381 00:18:31.589 04:09:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:31.589 04:09:46 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:18:31.589 00:18:31.589 real 0m7.300s 00:18:31.589 user 0m8.060s 00:18:31.589 sys 0m4.406s 00:18:31.589 04:09:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:31.589 04:09:46 -- common/autotest_common.sh@10 -- # set +x 00:18:31.589 ************************************ 00:18:31.589 END TEST nvmf_aer 00:18:31.589 ************************************ 00:18:31.589 04:09:46 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:18:31.589 04:09:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:31.589 04:09:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:31.589 04:09:46 -- common/autotest_common.sh@10 -- # set +x 00:18:31.848 ************************************ 00:18:31.848 START TEST nvmf_async_init 00:18:31.848 ************************************ 
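The teardown traced above uses the `killprocess` pattern: confirm the pid is alive with `kill -0`, look up its command name, then kill and reap it. A hedged sketch of that pattern (using the portable `ps -o comm= -p` form rather than the GNU `--no-headers` spelling in the log; the background `sleep` stands in for the nvmf target process):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess helper pattern from autotest_common.sh:
# verify the pid exists, report its command name, then kill and reap it.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1        # process must exist
    local name
    name=$(ps -o comm= -p "$pid")
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap; exit status 143 is expected
}

sleep 30 &
bgpid=$!
killprocess "$bgpid"
kill -0 "$bgpid" 2>/dev/null || echo "process gone"
```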
00:18:31.848 04:09:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:18:31.848 * Looking for test storage... 00:18:31.848 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:31.848 04:09:46 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:31.848 04:09:46 -- nvmf/common.sh@7 -- # uname -s 00:18:31.848 04:09:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.848 04:09:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.848 04:09:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.848 04:09:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.848 04:09:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.848 04:09:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.848 04:09:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.848 04:09:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.848 04:09:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.848 04:09:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.848 04:09:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:18:31.848 04:09:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:18:31.848 04:09:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.848 04:09:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.848 04:09:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:31.848 04:09:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.848 04:09:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:31.848 04:09:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.848 04:09:46 -- scripts/common.sh@510 -- # [[ 
-e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.848 04:09:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.848 04:09:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.848 04:09:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.848 04:09:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.848 04:09:46 -- paths/export.sh@5 -- # export PATH 00:18:31.848 04:09:46 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.848 04:09:46 -- nvmf/common.sh@47 -- # : 0 00:18:31.848 04:09:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:31.848 04:09:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:31.848 04:09:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.848 04:09:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.848 04:09:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.848 04:09:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:31.848 04:09:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:31.848 04:09:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:31.849 04:09:46 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:18:31.849 04:09:46 -- host/async_init.sh@14 -- # null_block_size=512 00:18:31.849 04:09:46 -- host/async_init.sh@15 -- # null_bdev=null0 00:18:31.849 04:09:46 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:18:31.849 04:09:46 -- host/async_init.sh@20 -- # uuidgen 00:18:31.849 04:09:46 -- host/async_init.sh@20 -- # tr -d - 00:18:31.849 04:09:46 -- host/async_init.sh@20 -- # nguid=c87a457b2459468e9d7aa5309b6cb5fe 00:18:31.849 04:09:46 -- host/async_init.sh@22 -- # nvmftestinit 00:18:31.849 04:09:46 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:18:31.849 04:09:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.849 04:09:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:31.849 04:09:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 
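The `nguid` value above is built by stripping the dashes from a freshly generated UUID (`uuidgen | tr -d -`). Reproduced here with the UUID fixed to the one from this run, so the derivation is deterministic:

```shell
#!/usr/bin/env bash
# async_init.sh derives the namespace NGUID from a UUID by deleting the dashes.
# The UUID below is pinned to the value generated in this log.
uuid='c87a457b-2459-468e-9d7a-a5309b6cb5fe'
nguid=$(echo "$uuid" | tr -d -)
echo "$nguid"   # → c87a457b2459468e9d7aa5309b6cb5fe
```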
00:18:31.849 04:09:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:31.849 04:09:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.849 04:09:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.849 04:09:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.849 04:09:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:31.849 04:09:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:31.849 04:09:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:31.849 04:09:46 -- common/autotest_common.sh@10 -- # set +x 00:18:37.117 04:09:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:37.117 04:09:51 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:37.117 04:09:51 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:37.117 04:09:51 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:37.117 04:09:51 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:37.117 04:09:51 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:37.117 04:09:51 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:37.117 04:09:51 -- nvmf/common.sh@295 -- # net_devs=() 00:18:37.117 04:09:51 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:37.117 04:09:51 -- nvmf/common.sh@296 -- # e810=() 00:18:37.117 04:09:51 -- nvmf/common.sh@296 -- # local -ga e810 00:18:37.117 04:09:51 -- nvmf/common.sh@297 -- # x722=() 00:18:37.117 04:09:51 -- nvmf/common.sh@297 -- # local -ga x722 00:18:37.117 04:09:51 -- nvmf/common.sh@298 -- # mlx=() 00:18:37.117 04:09:51 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:37.117 04:09:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:37.117 04:09:51 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:37.117 04:09:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:37.117 04:09:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:37.117 04:09:51 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:37.117 04:09:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:37.117 04:09:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:37.117 04:09:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:37.117 04:09:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:37.117 04:09:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:37.117 04:09:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:37.117 04:09:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:37.117 04:09:51 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:37.117 04:09:51 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:37.117 04:09:51 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:37.117 04:09:51 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:37.117 04:09:51 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:37.117 04:09:51 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:37.117 04:09:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:37.117 04:09:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:37.117 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:37.117 04:09:51 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:37.117 04:09:51 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:37.117 04:09:51 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:37.117 04:09:51 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:37.117 04:09:51 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:37.117 04:09:51 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:37.117 04:09:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:37.117 04:09:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:37.117 Found 0000:18:00.1 (0x15b3 - 0x1015) 
00:18:37.117 04:09:51 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:37.117 04:09:51 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:37.117 04:09:51 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:37.117 04:09:51 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:37.117 04:09:51 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:37.118 04:09:51 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:37.118 04:09:51 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:37.118 04:09:51 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:37.118 04:09:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:37.118 04:09:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.118 04:09:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:37.118 04:09:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.118 04:09:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:37.118 Found net devices under 0000:18:00.0: mlx_0_0 00:18:37.118 04:09:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.118 04:09:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:37.118 04:09:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.118 04:09:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:37.118 04:09:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.118 04:09:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:37.118 Found net devices under 0000:18:00.1: mlx_0_1 00:18:37.118 04:09:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.118 04:09:51 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:37.118 04:09:51 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:37.118 04:09:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:37.118 04:09:51 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:18:37.118 
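The "Found net devices under …" lines come from globbing each PCI function's `net/` subdirectory in sysfs and stripping the leading path with `${var##*/}`. A sketch against a mock sysfs tree (the temp directory and device names are illustrative stand-ins for `/sys/bus/pci/devices`):

```shell
#!/usr/bin/env bash
# Sketch of how gather_supported_nvmf_pci_devs maps PCI functions to net devices:
# glob <pci>/net/* and keep only the basename of each entry.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:18:00.0/net/mlx_0_0" "$sysfs/0000:18:00.1/net/mlx_0_1"

net_devs=()
for pci in 0000:18:00.0 0000:18:00.1; do
    pci_net_devs=("$sysfs/$pci/net/"*)        # glob the net/ subdirectory
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the leading path components
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```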
04:09:51 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:18:37.118 04:09:51 -- nvmf/common.sh@409 -- # rdma_device_init 00:18:37.118 04:09:51 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:18:37.118 04:09:51 -- nvmf/common.sh@58 -- # uname 00:18:37.118 04:09:51 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:37.118 04:09:51 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:37.118 04:09:51 -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:37.118 04:09:51 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:37.118 04:09:51 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:37.118 04:09:51 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:37.118 04:09:51 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:37.118 04:09:51 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:37.118 04:09:51 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:18:37.118 04:09:51 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:37.118 04:09:51 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:37.118 04:09:51 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:37.118 04:09:51 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:37.118 04:09:51 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:37.118 04:09:51 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:37.377 04:09:51 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:37.377 04:09:51 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:37.377 04:09:51 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:37.377 04:09:51 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:37.377 04:09:51 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:37.377 04:09:51 -- nvmf/common.sh@105 -- # continue 2 00:18:37.377 04:09:51 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:37.377 04:09:51 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:37.377 04:09:51 -- nvmf/common.sh@103 -- # [[ mlx_0_1 
== \m\l\x\_\0\_\0 ]] 00:18:37.377 04:09:51 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:37.377 04:09:51 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:37.377 04:09:51 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:37.377 04:09:51 -- nvmf/common.sh@105 -- # continue 2 00:18:37.377 04:09:51 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:37.377 04:09:51 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:37.377 04:09:51 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:37.377 04:09:51 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:37.377 04:09:51 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:37.377 04:09:51 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:37.377 04:09:51 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:37.377 04:09:51 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:37.377 04:09:51 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:37.377 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:37.377 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:18:37.377 altname enp24s0f0np0 00:18:37.377 altname ens785f0np0 00:18:37.377 inet 192.168.100.8/24 scope global mlx_0_0 00:18:37.377 valid_lft forever preferred_lft forever 00:18:37.377 04:09:51 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:37.377 04:09:51 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:37.377 04:09:51 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:37.377 04:09:51 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:37.377 04:09:51 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:37.377 04:09:51 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:37.377 04:09:51 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:37.377 04:09:51 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:37.377 04:09:51 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:37.377 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:37.377 link/ether 
50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:18:37.377 altname enp24s0f1np1 00:18:37.377 altname ens785f1np1 00:18:37.377 inet 192.168.100.9/24 scope global mlx_0_1 00:18:37.377 valid_lft forever preferred_lft forever 00:18:37.377 04:09:51 -- nvmf/common.sh@411 -- # return 0 00:18:37.377 04:09:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:37.377 04:09:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:37.377 04:09:51 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:18:37.377 04:09:51 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:18:37.377 04:09:51 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:37.377 04:09:51 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:37.377 04:09:51 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:37.378 04:09:51 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:37.378 04:09:51 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:37.378 04:09:51 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:37.378 04:09:51 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:37.378 04:09:51 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:37.378 04:09:51 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:37.378 04:09:51 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:37.378 04:09:51 -- nvmf/common.sh@105 -- # continue 2 00:18:37.378 04:09:51 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:37.378 04:09:51 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:37.378 04:09:51 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:37.378 04:09:51 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:37.378 04:09:51 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:37.378 04:09:51 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:37.378 04:09:51 -- nvmf/common.sh@105 -- # continue 2 00:18:37.378 04:09:51 -- nvmf/common.sh@86 
-- # for nic_name in $(get_rdma_if_list) 00:18:37.378 04:09:51 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:37.378 04:09:51 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:37.378 04:09:51 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:37.378 04:09:51 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:37.378 04:09:51 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:37.378 04:09:51 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:37.378 04:09:51 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:37.378 04:09:51 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:37.378 04:09:51 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:37.378 04:09:51 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:37.378 04:09:51 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:37.378 04:09:51 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:18:37.378 192.168.100.9' 00:18:37.378 04:09:51 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:37.378 192.168.100.9' 00:18:37.378 04:09:51 -- nvmf/common.sh@446 -- # head -n 1 00:18:37.378 04:09:51 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:37.378 04:09:51 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:18:37.378 192.168.100.9' 00:18:37.378 04:09:51 -- nvmf/common.sh@447 -- # tail -n +2 00:18:37.378 04:09:51 -- nvmf/common.sh@447 -- # head -n 1 00:18:37.378 04:09:51 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:37.378 04:09:51 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:18:37.378 04:09:51 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:37.378 04:09:51 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:18:37.378 04:09:51 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:18:37.378 04:09:51 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:18:37.378 04:09:51 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:18:37.378 04:09:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 
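The `get_ip_address` calls above extract each interface's IPv4 address from `ip -o -4 addr show` with `awk`/`cut`, and `nvmftestinit` then splits the newline-separated `RDMA_IP_LIST` into first and second target IPs with `head`/`tail`. Reproduced standalone against sample text captured from this log, instead of a live `ip` call:

```shell
#!/usr/bin/env bash
# Extract the IPv4 address from an `ip -o -4 addr show <if>` line: field 4 is
# "addr/prefix", and cut drops the prefix length. Sample line is from the log.
sample='8: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0\       valid_lft forever preferred_lft forever'
ip_addr=$(printf '%s\n' "$sample" | awk '{print $4}' | cut -d/ -f1)
echo "$ip_addr"   # → 192.168.100.8

# Split the newline-separated IP list the way nvmf/common.sh does.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # → 192.168.100.8 192.168.100.9
```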
00:18:37.378 04:09:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:37.378 04:09:51 -- common/autotest_common.sh@10 -- # set +x 00:18:37.378 04:09:51 -- nvmf/common.sh@470 -- # nvmfpid=347928 00:18:37.378 04:09:51 -- nvmf/common.sh@471 -- # waitforlisten 347928 00:18:37.378 04:09:51 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:37.378 04:09:51 -- common/autotest_common.sh@817 -- # '[' -z 347928 ']' 00:18:37.378 04:09:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.378 04:09:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:37.378 04:09:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.378 04:09:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:37.378 04:09:51 -- common/autotest_common.sh@10 -- # set +x 00:18:37.378 [2024-04-19 04:09:51.832985] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:18:37.378 [2024-04-19 04:09:51.833035] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.378 EAL: No free 2048 kB hugepages reported on node 1 00:18:37.378 [2024-04-19 04:09:51.884554] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.637 [2024-04-19 04:09:51.956006] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.637 [2024-04-19 04:09:51.956046] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:37.637 [2024-04-19 04:09:51.956052] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:37.637 [2024-04-19 04:09:51.956057] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:37.637 [2024-04-19 04:09:51.956061] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:37.637 [2024-04-19 04:09:51.956077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.204 04:09:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:38.204 04:09:52 -- common/autotest_common.sh@850 -- # return 0 00:18:38.204 04:09:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:38.204 04:09:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:38.204 04:09:52 -- common/autotest_common.sh@10 -- # set +x 00:18:38.204 04:09:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.204 04:09:52 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:18:38.204 04:09:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.204 04:09:52 -- common/autotest_common.sh@10 -- # set +x 00:18:38.204 [2024-04-19 04:09:52.659110] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb42520/0xb46a10) succeed. 00:18:38.204 [2024-04-19 04:09:52.666860] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb43a20/0xb880a0) succeed. 
00:18:38.204 04:09:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.204 04:09:52 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:18:38.204 04:09:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.204 04:09:52 -- common/autotest_common.sh@10 -- # set +x 00:18:38.204 null0 00:18:38.204 04:09:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.204 04:09:52 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:18:38.204 04:09:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.204 04:09:52 -- common/autotest_common.sh@10 -- # set +x 00:18:38.204 04:09:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.204 04:09:52 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:18:38.204 04:09:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.204 04:09:52 -- common/autotest_common.sh@10 -- # set +x 00:18:38.204 04:09:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.204 04:09:52 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g c87a457b2459468e9d7aa5309b6cb5fe 00:18:38.204 04:09:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.204 04:09:52 -- common/autotest_common.sh@10 -- # set +x 00:18:38.463 04:09:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.463 04:09:52 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:38.463 04:09:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.463 04:09:52 -- common/autotest_common.sh@10 -- # set +x 00:18:38.463 [2024-04-19 04:09:52.744747] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:38.463 04:09:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.463 04:09:52 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 
192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:18:38.463 04:09:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.463 04:09:52 -- common/autotest_common.sh@10 -- # set +x 00:18:38.463 nvme0n1 00:18:38.463 04:09:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.463 04:09:52 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:18:38.463 04:09:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.463 04:09:52 -- common/autotest_common.sh@10 -- # set +x 00:18:38.463 [ 00:18:38.463 { 00:18:38.463 "name": "nvme0n1", 00:18:38.463 "aliases": [ 00:18:38.463 "c87a457b-2459-468e-9d7a-a5309b6cb5fe" 00:18:38.463 ], 00:18:38.463 "product_name": "NVMe disk", 00:18:38.463 "block_size": 512, 00:18:38.463 "num_blocks": 2097152, 00:18:38.463 "uuid": "c87a457b-2459-468e-9d7a-a5309b6cb5fe", 00:18:38.463 "assigned_rate_limits": { 00:18:38.463 "rw_ios_per_sec": 0, 00:18:38.463 "rw_mbytes_per_sec": 0, 00:18:38.463 "r_mbytes_per_sec": 0, 00:18:38.463 "w_mbytes_per_sec": 0 00:18:38.463 }, 00:18:38.463 "claimed": false, 00:18:38.463 "zoned": false, 00:18:38.463 "supported_io_types": { 00:18:38.463 "read": true, 00:18:38.463 "write": true, 00:18:38.463 "unmap": false, 00:18:38.463 "write_zeroes": true, 00:18:38.463 "flush": true, 00:18:38.463 "reset": true, 00:18:38.463 "compare": true, 00:18:38.463 "compare_and_write": true, 00:18:38.463 "abort": true, 00:18:38.463 "nvme_admin": true, 00:18:38.463 "nvme_io": true 00:18:38.463 }, 00:18:38.463 "memory_domains": [ 00:18:38.463 { 00:18:38.463 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:38.463 "dma_device_type": 0 00:18:38.463 } 00:18:38.463 ], 00:18:38.463 "driver_specific": { 00:18:38.463 "nvme": [ 00:18:38.463 { 00:18:38.463 "trid": { 00:18:38.463 "trtype": "RDMA", 00:18:38.463 "adrfam": "IPv4", 00:18:38.463 "traddr": "192.168.100.8", 00:18:38.463 "trsvcid": "4420", 00:18:38.463 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:38.463 }, 00:18:38.463 "ctrlr_data": { 00:18:38.463 
"cntlid": 1, 00:18:38.463 "vendor_id": "0x8086", 00:18:38.463 "model_number": "SPDK bdev Controller", 00:18:38.463 "serial_number": "00000000000000000000", 00:18:38.463 "firmware_revision": "24.05", 00:18:38.463 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:38.463 "oacs": { 00:18:38.463 "security": 0, 00:18:38.463 "format": 0, 00:18:38.463 "firmware": 0, 00:18:38.463 "ns_manage": 0 00:18:38.463 }, 00:18:38.463 "multi_ctrlr": true, 00:18:38.463 "ana_reporting": false 00:18:38.463 }, 00:18:38.463 "vs": { 00:18:38.463 "nvme_version": "1.3" 00:18:38.463 }, 00:18:38.463 "ns_data": { 00:18:38.463 "id": 1, 00:18:38.463 "can_share": true 00:18:38.463 } 00:18:38.463 } 00:18:38.463 ], 00:18:38.463 "mp_policy": "active_passive" 00:18:38.463 } 00:18:38.463 } 00:18:38.463 ] 00:18:38.463 04:09:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.463 04:09:52 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:18:38.463 04:09:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.463 04:09:52 -- common/autotest_common.sh@10 -- # set +x 00:18:38.463 [2024-04-19 04:09:52.836485] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:38.463 [2024-04-19 04:09:52.857898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:38.463 [2024-04-19 04:09:52.878367] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:38.463 04:09:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.463 04:09:52 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:18:38.463 04:09:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.463 04:09:52 -- common/autotest_common.sh@10 -- # set +x 00:18:38.463 [ 00:18:38.463 { 00:18:38.463 "name": "nvme0n1", 00:18:38.463 "aliases": [ 00:18:38.463 "c87a457b-2459-468e-9d7a-a5309b6cb5fe" 00:18:38.463 ], 00:18:38.463 "product_name": "NVMe disk", 00:18:38.463 "block_size": 512, 00:18:38.463 "num_blocks": 2097152, 00:18:38.463 "uuid": "c87a457b-2459-468e-9d7a-a5309b6cb5fe", 00:18:38.463 "assigned_rate_limits": { 00:18:38.463 "rw_ios_per_sec": 0, 00:18:38.463 "rw_mbytes_per_sec": 0, 00:18:38.463 "r_mbytes_per_sec": 0, 00:18:38.463 "w_mbytes_per_sec": 0 00:18:38.463 }, 00:18:38.463 "claimed": false, 00:18:38.463 "zoned": false, 00:18:38.463 "supported_io_types": { 00:18:38.463 "read": true, 00:18:38.463 "write": true, 00:18:38.463 "unmap": false, 00:18:38.463 "write_zeroes": true, 00:18:38.463 "flush": true, 00:18:38.463 "reset": true, 00:18:38.463 "compare": true, 00:18:38.463 "compare_and_write": true, 00:18:38.463 "abort": true, 00:18:38.463 "nvme_admin": true, 00:18:38.463 "nvme_io": true 00:18:38.463 }, 00:18:38.463 "memory_domains": [ 00:18:38.463 { 00:18:38.463 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:38.463 "dma_device_type": 0 00:18:38.463 } 00:18:38.463 ], 00:18:38.463 "driver_specific": { 00:18:38.463 "nvme": [ 00:18:38.463 { 00:18:38.463 "trid": { 00:18:38.463 "trtype": "RDMA", 00:18:38.463 "adrfam": "IPv4", 00:18:38.463 "traddr": "192.168.100.8", 00:18:38.463 "trsvcid": "4420", 00:18:38.463 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:38.463 }, 00:18:38.463 "ctrlr_data": { 00:18:38.463 "cntlid": 2, 00:18:38.463 "vendor_id": "0x8086", 00:18:38.463 "model_number": "SPDK bdev Controller", 00:18:38.463 "serial_number": "00000000000000000000", 00:18:38.463 "firmware_revision": "24.05", 00:18:38.463 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:18:38.463 "oacs": { 00:18:38.463 "security": 0, 00:18:38.463 "format": 0, 00:18:38.463 "firmware": 0, 00:18:38.463 "ns_manage": 0 00:18:38.463 }, 00:18:38.463 "multi_ctrlr": true, 00:18:38.463 "ana_reporting": false 00:18:38.463 }, 00:18:38.463 "vs": { 00:18:38.463 "nvme_version": "1.3" 00:18:38.463 }, 00:18:38.463 "ns_data": { 00:18:38.463 "id": 1, 00:18:38.463 "can_share": true 00:18:38.463 } 00:18:38.463 } 00:18:38.463 ], 00:18:38.463 "mp_policy": "active_passive" 00:18:38.463 } 00:18:38.463 } 00:18:38.463 ] 00:18:38.463 04:09:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.463 04:09:52 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.463 04:09:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.463 04:09:52 -- common/autotest_common.sh@10 -- # set +x 00:18:38.463 04:09:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.463 04:09:52 -- host/async_init.sh@53 -- # mktemp 00:18:38.463 04:09:52 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.3IjYhLonRK 00:18:38.463 04:09:52 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:38.463 04:09:52 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.3IjYhLonRK 00:18:38.463 04:09:52 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:18:38.463 04:09:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.463 04:09:52 -- common/autotest_common.sh@10 -- # set +x 00:18:38.463 04:09:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.463 04:09:52 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:18:38.463 04:09:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.463 04:09:52 -- common/autotest_common.sh@10 -- # set +x 00:18:38.463 [2024-04-19 04:09:52.932495] rdma.c:3080:nvmf_rdma_listen: 
*NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:18:38.463 04:09:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.464 04:09:52 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3IjYhLonRK 00:18:38.464 04:09:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.464 04:09:52 -- common/autotest_common.sh@10 -- # set +x 00:18:38.464 04:09:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.464 04:09:52 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3IjYhLonRK 00:18:38.464 04:09:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.464 04:09:52 -- common/autotest_common.sh@10 -- # set +x 00:18:38.464 [2024-04-19 04:09:52.948523] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:38.722 nvme0n1 00:18:38.722 04:09:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.722 04:09:53 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:18:38.722 04:09:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.722 04:09:53 -- common/autotest_common.sh@10 -- # set +x 00:18:38.722 [ 00:18:38.722 { 00:18:38.722 "name": "nvme0n1", 00:18:38.722 "aliases": [ 00:18:38.722 "c87a457b-2459-468e-9d7a-a5309b6cb5fe" 00:18:38.722 ], 00:18:38.722 "product_name": "NVMe disk", 00:18:38.722 "block_size": 512, 00:18:38.722 "num_blocks": 2097152, 00:18:38.722 "uuid": "c87a457b-2459-468e-9d7a-a5309b6cb5fe", 00:18:38.722 "assigned_rate_limits": { 00:18:38.722 "rw_ios_per_sec": 0, 00:18:38.722 "rw_mbytes_per_sec": 0, 00:18:38.722 "r_mbytes_per_sec": 0, 00:18:38.722 "w_mbytes_per_sec": 0 00:18:38.722 }, 00:18:38.722 "claimed": false, 00:18:38.722 "zoned": false, 00:18:38.722 "supported_io_types": { 00:18:38.722 "read": true, 
00:18:38.722 "write": true, 00:18:38.722 "unmap": false, 00:18:38.722 "write_zeroes": true, 00:18:38.722 "flush": true, 00:18:38.722 "reset": true, 00:18:38.722 "compare": true, 00:18:38.722 "compare_and_write": true, 00:18:38.722 "abort": true, 00:18:38.722 "nvme_admin": true, 00:18:38.722 "nvme_io": true 00:18:38.722 }, 00:18:38.722 "memory_domains": [ 00:18:38.722 { 00:18:38.722 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:38.722 "dma_device_type": 0 00:18:38.722 } 00:18:38.722 ], 00:18:38.722 "driver_specific": { 00:18:38.722 "nvme": [ 00:18:38.722 { 00:18:38.722 "trid": { 00:18:38.722 "trtype": "RDMA", 00:18:38.722 "adrfam": "IPv4", 00:18:38.722 "traddr": "192.168.100.8", 00:18:38.722 "trsvcid": "4421", 00:18:38.722 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:38.722 }, 00:18:38.722 "ctrlr_data": { 00:18:38.722 "cntlid": 3, 00:18:38.722 "vendor_id": "0x8086", 00:18:38.723 "model_number": "SPDK bdev Controller", 00:18:38.723 "serial_number": "00000000000000000000", 00:18:38.723 "firmware_revision": "24.05", 00:18:38.723 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:38.723 "oacs": { 00:18:38.723 "security": 0, 00:18:38.723 "format": 0, 00:18:38.723 "firmware": 0, 00:18:38.723 "ns_manage": 0 00:18:38.723 }, 00:18:38.723 "multi_ctrlr": true, 00:18:38.723 "ana_reporting": false 00:18:38.723 }, 00:18:38.723 "vs": { 00:18:38.723 "nvme_version": "1.3" 00:18:38.723 }, 00:18:38.723 "ns_data": { 00:18:38.723 "id": 1, 00:18:38.723 "can_share": true 00:18:38.723 } 00:18:38.723 } 00:18:38.723 ], 00:18:38.723 "mp_policy": "active_passive" 00:18:38.723 } 00:18:38.723 } 00:18:38.723 ] 00:18:38.723 04:09:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.723 04:09:53 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.723 04:09:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.723 04:09:53 -- common/autotest_common.sh@10 -- # set +x 00:18:38.723 04:09:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.723 
04:09:53 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.3IjYhLonRK 00:18:38.723 04:09:53 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:18:38.723 04:09:53 -- host/async_init.sh@78 -- # nvmftestfini 00:18:38.723 04:09:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:38.723 04:09:53 -- nvmf/common.sh@117 -- # sync 00:18:38.723 04:09:53 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:38.723 04:09:53 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:38.723 04:09:53 -- nvmf/common.sh@120 -- # set +e 00:18:38.723 04:09:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:38.723 04:09:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:38.723 rmmod nvme_rdma 00:18:38.723 rmmod nvme_fabrics 00:18:38.723 04:09:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:38.723 04:09:53 -- nvmf/common.sh@124 -- # set -e 00:18:38.723 04:09:53 -- nvmf/common.sh@125 -- # return 0 00:18:38.723 04:09:53 -- nvmf/common.sh@478 -- # '[' -n 347928 ']' 00:18:38.723 04:09:53 -- nvmf/common.sh@479 -- # killprocess 347928 00:18:38.723 04:09:53 -- common/autotest_common.sh@936 -- # '[' -z 347928 ']' 00:18:38.723 04:09:53 -- common/autotest_common.sh@940 -- # kill -0 347928 00:18:38.723 04:09:53 -- common/autotest_common.sh@941 -- # uname 00:18:38.723 04:09:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:38.723 04:09:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 347928 00:18:38.723 04:09:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:38.723 04:09:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:38.723 04:09:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 347928' 00:18:38.723 killing process with pid 347928 00:18:38.723 04:09:53 -- common/autotest_common.sh@955 -- # kill 347928 00:18:38.723 04:09:53 -- common/autotest_common.sh@960 -- # wait 347928 00:18:38.982 04:09:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:38.982 04:09:53 -- 
nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:18:38.982 00:18:38.982 real 0m7.184s 00:18:38.982 user 0m3.254s 00:18:38.982 sys 0m4.462s 00:18:38.982 04:09:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:38.982 04:09:53 -- common/autotest_common.sh@10 -- # set +x 00:18:38.982 ************************************ 00:18:38.982 END TEST nvmf_async_init 00:18:38.982 ************************************ 00:18:38.982 04:09:53 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:18:38.982 04:09:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:38.982 04:09:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:38.982 04:09:53 -- common/autotest_common.sh@10 -- # set +x 00:18:39.242 ************************************ 00:18:39.242 START TEST dma 00:18:39.242 ************************************ 00:18:39.242 04:09:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:18:39.242 * Looking for test storage... 
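[Annotation, not part of the captured log] The async_init test that just finished staged its TLS PSK with `mktemp`, `echo -n`, and `chmod 0600` before passing it via `--psk` (and removed it afterwards with `rm -f`). A portable sketch of that pattern; the key value is the throwaway sample from this log, not a real secret:

```shell
# Stage a PSK file the way host/async_init.sh does: private temp file, 0600 perms.
key_path=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
chmod 0600 "$key_path"
stat -c '%a' "$key_path"   # 600 (GNU stat; the log runs on Linux)
```

The 0600 mode matters because SPDK's TLS path expects the PSK file to be readable only by its owner.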
00:18:39.242 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:39.242 04:09:53 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:39.242 04:09:53 -- nvmf/common.sh@7 -- # uname -s 00:18:39.242 04:09:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:39.242 04:09:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:39.242 04:09:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:39.242 04:09:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:39.242 04:09:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:39.242 04:09:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:39.242 04:09:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:39.242 04:09:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:39.242 04:09:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:39.242 04:09:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:39.242 04:09:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:18:39.242 04:09:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:18:39.242 04:09:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:39.242 04:09:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:39.242 04:09:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:39.242 04:09:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:39.242 04:09:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:39.242 04:09:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:39.242 04:09:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:39.242 04:09:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:39.242 04:09:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.242 04:09:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.242 04:09:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.242 04:09:53 -- paths/export.sh@5 -- # export PATH 00:18:39.242 04:09:53 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.242 04:09:53 -- nvmf/common.sh@47 -- # : 0 00:18:39.242 04:09:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:39.242 04:09:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:39.242 04:09:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:39.242 04:09:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:39.242 04:09:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:39.242 04:09:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:39.242 04:09:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:39.242 04:09:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:39.242 04:09:53 -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:18:39.242 04:09:53 -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:18:39.242 04:09:53 -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:18:39.242 04:09:53 -- host/dma.sh@18 -- # subsystem=0 00:18:39.242 04:09:53 -- host/dma.sh@93 -- # nvmftestinit 00:18:39.242 04:09:53 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:18:39.242 04:09:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:39.242 04:09:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:39.242 04:09:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:39.242 04:09:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:39.242 04:09:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.242 04:09:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:18:39.242 04:09:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.242 04:09:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:39.242 04:09:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:39.242 04:09:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:39.242 04:09:53 -- common/autotest_common.sh@10 -- # set +x 00:18:44.516 04:09:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:44.516 04:09:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:44.516 04:09:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:44.516 04:09:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:44.516 04:09:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:44.516 04:09:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:44.516 04:09:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:44.516 04:09:58 -- nvmf/common.sh@295 -- # net_devs=() 00:18:44.516 04:09:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:44.516 04:09:58 -- nvmf/common.sh@296 -- # e810=() 00:18:44.516 04:09:58 -- nvmf/common.sh@296 -- # local -ga e810 00:18:44.516 04:09:58 -- nvmf/common.sh@297 -- # x722=() 00:18:44.516 04:09:58 -- nvmf/common.sh@297 -- # local -ga x722 00:18:44.516 04:09:58 -- nvmf/common.sh@298 -- # mlx=() 00:18:44.516 04:09:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:44.516 04:09:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:44.516 04:09:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:44.516 04:09:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:44.517 04:09:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:44.517 04:09:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:44.517 04:09:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:44.517 04:09:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:44.517 04:09:58 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:44.517 04:09:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:44.517 04:09:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:44.517 04:09:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:44.517 04:09:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:44.517 04:09:58 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:44.517 04:09:58 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:44.517 04:09:58 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:44.517 04:09:58 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:44.517 04:09:58 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:44.517 04:09:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:44.517 04:09:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:44.517 04:09:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:44.517 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:44.517 04:09:58 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:44.517 04:09:58 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:44.517 04:09:58 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:44.517 04:09:58 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:44.517 04:09:58 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:44.517 04:09:58 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:44.517 04:09:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:44.517 04:09:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:44.517 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:44.517 04:09:58 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:44.517 04:09:58 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:44.517 04:09:58 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:44.517 04:09:58 -- 
nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:44.517 04:09:58 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:44.517 04:09:58 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:44.517 04:09:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:44.517 04:09:58 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:44.517 04:09:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:44.517 04:09:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.517 04:09:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:44.517 04:09:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.517 04:09:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:44.517 Found net devices under 0000:18:00.0: mlx_0_0 00:18:44.517 04:09:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.517 04:09:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:44.517 04:09:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.517 04:09:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:44.517 04:09:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.517 04:09:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:44.517 Found net devices under 0000:18:00.1: mlx_0_1 00:18:44.517 04:09:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.517 04:09:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:44.517 04:09:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:44.517 04:09:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:44.517 04:09:58 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:18:44.517 04:09:58 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:18:44.517 04:09:58 -- nvmf/common.sh@409 -- # rdma_device_init 00:18:44.517 04:09:58 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:18:44.517 04:09:58 -- nvmf/common.sh@58 -- # uname 00:18:44.517 
04:09:58 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:44.517 04:09:58 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:44.517 04:09:58 -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:44.517 04:09:58 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:44.517 04:09:58 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:44.517 04:09:58 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:44.517 04:09:58 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:44.517 04:09:58 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:44.517 04:09:58 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:18:44.517 04:09:58 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:44.517 04:09:58 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:44.517 04:09:58 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:44.517 04:09:58 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:44.517 04:09:58 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:44.517 04:09:58 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:44.517 04:09:58 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:44.517 04:09:58 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:44.517 04:09:58 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.517 04:09:58 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:44.517 04:09:58 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:44.517 04:09:58 -- nvmf/common.sh@105 -- # continue 2 00:18:44.517 04:09:58 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:44.517 04:09:58 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.517 04:09:58 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:44.517 04:09:58 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.517 04:09:58 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:44.517 04:09:58 -- nvmf/common.sh@104 -- # echo mlx_0_1 
00:18:44.517 04:09:58 -- nvmf/common.sh@105 -- # continue 2 00:18:44.517 04:09:58 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:44.517 04:09:58 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:44.517 04:09:58 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:44.517 04:09:58 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:44.517 04:09:58 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:44.517 04:09:58 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:44.517 04:09:59 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:44.517 04:09:59 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:44.517 04:09:59 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:44.517 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:44.517 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:18:44.517 altname enp24s0f0np0 00:18:44.517 altname ens785f0np0 00:18:44.517 inet 192.168.100.8/24 scope global mlx_0_0 00:18:44.517 valid_lft forever preferred_lft forever 00:18:44.517 04:09:59 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:44.517 04:09:59 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:44.517 04:09:59 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:44.517 04:09:59 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:44.517 04:09:59 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:44.517 04:09:59 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:44.517 04:09:59 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:44.517 04:09:59 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:44.517 04:09:59 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:44.517 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:44.517 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:18:44.517 altname enp24s0f1np1 00:18:44.517 altname ens785f1np1 00:18:44.517 inet 192.168.100.9/24 scope global mlx_0_1 00:18:44.517 valid_lft forever preferred_lft forever 00:18:44.517 04:09:59 -- nvmf/common.sh@411 
-- # return 0 00:18:44.517 04:09:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:44.517 04:09:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:44.517 04:09:59 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:18:44.517 04:09:59 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:18:44.517 04:09:59 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:44.517 04:09:59 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:44.517 04:09:59 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:44.517 04:09:59 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:44.517 04:09:59 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:44.776 04:09:59 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:44.776 04:09:59 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:44.776 04:09:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.776 04:09:59 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:44.776 04:09:59 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:44.776 04:09:59 -- nvmf/common.sh@105 -- # continue 2 00:18:44.776 04:09:59 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:44.776 04:09:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.776 04:09:59 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:44.776 04:09:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:44.776 04:09:59 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:44.776 04:09:59 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:44.776 04:09:59 -- nvmf/common.sh@105 -- # continue 2 00:18:44.776 04:09:59 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:44.776 04:09:59 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:44.776 04:09:59 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:44.776 04:09:59 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 
00:18:44.776 04:09:59 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:44.776 04:09:59 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:44.776 04:09:59 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:44.776 04:09:59 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:44.776 04:09:59 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:44.776 04:09:59 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:44.776 04:09:59 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:44.776 04:09:59 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:44.776 04:09:59 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:18:44.776 192.168.100.9' 00:18:44.776 04:09:59 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:44.776 192.168.100.9' 00:18:44.776 04:09:59 -- nvmf/common.sh@446 -- # head -n 1 00:18:44.776 04:09:59 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:44.776 04:09:59 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:18:44.776 192.168.100.9' 00:18:44.776 04:09:59 -- nvmf/common.sh@447 -- # head -n 1 00:18:44.776 04:09:59 -- nvmf/common.sh@447 -- # tail -n +2 00:18:44.776 04:09:59 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:44.776 04:09:59 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:18:44.776 04:09:59 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:44.776 04:09:59 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:18:44.776 04:09:59 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:18:44.776 04:09:59 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:18:44.776 04:09:59 -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:18:44.776 04:09:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:44.776 04:09:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:44.776 04:09:59 -- common/autotest_common.sh@10 -- # set +x 00:18:44.776 04:09:59 -- nvmf/common.sh@470 -- # nvmfpid=351457 00:18:44.776 04:09:59 -- nvmf/common.sh@471 -- # waitforlisten 
351457 00:18:44.776 04:09:59 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:44.776 04:09:59 -- common/autotest_common.sh@817 -- # '[' -z 351457 ']' 00:18:44.776 04:09:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.776 04:09:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:44.776 04:09:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.776 04:09:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:44.776 04:09:59 -- common/autotest_common.sh@10 -- # set +x 00:18:44.776 [2024-04-19 04:09:59.168696] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:18:44.776 [2024-04-19 04:09:59.168746] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.776 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.776 [2024-04-19 04:09:59.221296] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:44.776 [2024-04-19 04:09:59.293448] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.776 [2024-04-19 04:09:59.293484] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.776 [2024-04-19 04:09:59.293490] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.776 [2024-04-19 04:09:59.293496] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:44.776 [2024-04-19 04:09:59.293500] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:44.776 [2024-04-19 04:09:59.293534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.776 [2024-04-19 04:09:59.293538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.712 04:09:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:45.712 04:09:59 -- common/autotest_common.sh@850 -- # return 0 00:18:45.712 04:09:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:45.712 04:09:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:45.712 04:09:59 -- common/autotest_common.sh@10 -- # set +x 00:18:45.712 04:09:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.712 04:09:59 -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:18:45.712 04:09:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.712 04:09:59 -- common/autotest_common.sh@10 -- # set +x 00:18:45.712 [2024-04-19 04:09:59.990520] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x135d060/0x1361550) succeed. 00:18:45.712 [2024-04-19 04:09:59.998467] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x135e560/0x13a2be0) succeed. 
00:18:45.712 04:10:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.712 04:10:00 -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:18:45.712 04:10:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.712 04:10:00 -- common/autotest_common.sh@10 -- # set +x 00:18:45.712 Malloc0 00:18:45.712 04:10:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.712 04:10:00 -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:18:45.712 04:10:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.712 04:10:00 -- common/autotest_common.sh@10 -- # set +x 00:18:45.712 04:10:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.712 04:10:00 -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:18:45.712 04:10:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.712 04:10:00 -- common/autotest_common.sh@10 -- # set +x 00:18:45.712 04:10:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.712 04:10:00 -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:45.712 04:10:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.712 04:10:00 -- common/autotest_common.sh@10 -- # set +x 00:18:45.712 [2024-04-19 04:10:00.152717] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:45.712 04:10:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.712 04:10:00 -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:18:45.712 04:10:00 -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:18:45.712 04:10:00 -- nvmf/common.sh@521 -- # config=() 00:18:45.712 04:10:00 -- nvmf/common.sh@521 -- # local subsystem config 00:18:45.712 04:10:00 -- 
nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:45.712 04:10:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:45.712 { 00:18:45.712 "params": { 00:18:45.712 "name": "Nvme$subsystem", 00:18:45.712 "trtype": "$TEST_TRANSPORT", 00:18:45.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:45.712 "adrfam": "ipv4", 00:18:45.712 "trsvcid": "$NVMF_PORT", 00:18:45.712 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:45.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:45.712 "hdgst": ${hdgst:-false}, 00:18:45.712 "ddgst": ${ddgst:-false} 00:18:45.712 }, 00:18:45.712 "method": "bdev_nvme_attach_controller" 00:18:45.712 } 00:18:45.712 EOF 00:18:45.712 )") 00:18:45.712 04:10:00 -- nvmf/common.sh@543 -- # cat 00:18:45.712 04:10:00 -- nvmf/common.sh@545 -- # jq . 00:18:45.712 04:10:00 -- nvmf/common.sh@546 -- # IFS=, 00:18:45.712 04:10:00 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:45.712 "params": { 00:18:45.712 "name": "Nvme0", 00:18:45.712 "trtype": "rdma", 00:18:45.712 "traddr": "192.168.100.8", 00:18:45.712 "adrfam": "ipv4", 00:18:45.712 "trsvcid": "4420", 00:18:45.712 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:45.712 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:45.712 "hdgst": false, 00:18:45.713 "ddgst": false 00:18:45.713 }, 00:18:45.713 "method": "bdev_nvme_attach_controller" 00:18:45.713 }' 00:18:45.713 [2024-04-19 04:10:00.196284] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:18:45.713 [2024-04-19 04:10:00.196318] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid351514 ] 00:18:45.713 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.971 [2024-04-19 04:10:00.244072] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:45.971 [2024-04-19 04:10:00.312567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:45.971 [2024-04-19 04:10:00.312570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.235 bdev Nvme0n1 reports 1 memory domains 00:18:51.235 bdev Nvme0n1 supports RDMA memory domain 00:18:51.235 Initialization complete, running randrw IO for 5 sec on 2 cores 00:18:51.235 ========================================================================== 00:18:51.235 Latency [us] 00:18:51.235 IOPS MiB/s Average min max 00:18:51.235 Core 2: 23465.19 91.66 681.21 217.95 9507.65 00:18:51.235 Core 3: 23567.58 92.06 678.22 228.88 9471.03 00:18:51.235 ========================================================================== 00:18:51.235 Total : 47032.77 183.72 679.72 217.95 9507.65 00:18:51.235 00:18:51.235 Total operations: 235198, translate 235198 pull_push 0 memzero 0 00:18:51.235 04:10:05 -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:18:51.235 04:10:05 -- host/dma.sh@107 -- # gen_malloc_json 00:18:51.235 04:10:05 -- host/dma.sh@21 -- # jq . 00:18:51.235 [2024-04-19 04:10:05.736485] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:18:51.235 [2024-04-19 04:10:05.736536] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid352543 ] 00:18:51.235 EAL: No free 2048 kB hugepages reported on node 1 00:18:51.493 [2024-04-19 04:10:05.784791] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:51.493 [2024-04-19 04:10:05.852141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:51.493 [2024-04-19 04:10:05.852144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.759 bdev Malloc0 reports 2 memory domains 00:18:56.759 bdev Malloc0 doesn't support RDMA memory domain 00:18:56.759 Initialization complete, running randrw IO for 5 sec on 2 cores 00:18:56.759 ========================================================================== 00:18:56.759 Latency [us] 00:18:56.759 IOPS MiB/s Average min max 00:18:56.759 Core 2: 15585.63 60.88 1025.89 433.17 2242.08 00:18:56.759 Core 3: 15711.59 61.37 1017.65 393.82 2041.28 00:18:56.759 ========================================================================== 00:18:56.759 Total : 31297.22 122.25 1021.75 393.82 2242.08 00:18:56.759 00:18:56.759 Total operations: 156536, translate 0 pull_push 626144 memzero 0 00:18:56.759 04:10:11 -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:18:56.759 04:10:11 -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:18:56.759 04:10:11 -- host/dma.sh@48 -- # local subsystem=0 00:18:56.759 04:10:11 -- host/dma.sh@50 -- # jq . 00:18:56.759 Ignoring -M option 00:18:56.759 [2024-04-19 04:10:11.212182] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:18:56.759 [2024-04-19 04:10:11.212237] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid353587 ] 00:18:56.759 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.760 [2024-04-19 04:10:11.258340] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:57.018 [2024-04-19 04:10:11.324922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:57.018 [2024-04-19 04:10:11.324924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:57.018 [2024-04-19 04:10:11.529759] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:19:02.278 [2024-04-19 04:10:16.557498] app.c: 937:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:19:02.278 bdev d723b8a2-c767-47f8-8e4e-a6be2f0c4cec reports 1 memory domains 00:19:02.278 bdev d723b8a2-c767-47f8-8e4e-a6be2f0c4cec supports RDMA memory domain 00:19:02.278 Initialization complete, running randread IO for 5 sec on 2 cores 00:19:02.278 ========================================================================== 00:19:02.278 Latency [us] 00:19:02.278 IOPS MiB/s Average min max 00:19:02.278 Core 2: 85483.57 333.92 186.49 65.64 2817.92 00:19:02.279 Core 3: 89337.15 348.97 178.42 68.65 2754.54 00:19:02.279 ========================================================================== 00:19:02.279 Total : 174820.72 682.89 182.36 65.64 2817.92 00:19:02.279 00:19:02.279 Total operations: 874198, translate 0 pull_push 0 memzero 874198 00:19:02.279 04:10:16 -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 
traddr:192.168.100.8 trsvcid:4420' 00:19:02.536 EAL: No free 2048 kB hugepages reported on node 1 00:19:02.536 [2024-04-19 04:10:16.862727] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:05.064 Initializing NVMe Controllers 00:19:05.064 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:19:05.064 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:19:05.064 Initialization complete. Launching workers. 00:19:05.064 ======================================================== 00:19:05.064 Latency(us) 00:19:05.064 Device Information : IOPS MiB/s Average min max 00:19:05.064 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2032.00 7.94 7909.84 4925.43 8960.25 00:19:05.064 ======================================================== 00:19:05.064 Total : 2032.00 7.94 7909.84 4925.43 8960.25 00:19:05.064 00:19:05.064 04:10:19 -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:19:05.064 04:10:19 -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:19:05.064 04:10:19 -- host/dma.sh@48 -- # local subsystem=0 00:19:05.064 04:10:19 -- host/dma.sh@50 -- # jq . 00:19:05.064 [2024-04-19 04:10:19.189933] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:19:05.064 [2024-04-19 04:10:19.189981] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid354904 ] 00:19:05.064 EAL: No free 2048 kB hugepages reported on node 1 00:19:05.064 [2024-04-19 04:10:19.236307] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:05.064 [2024-04-19 04:10:19.305368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:05.064 [2024-04-19 04:10:19.305371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.064 [2024-04-19 04:10:19.508419] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:19:10.551 [2024-04-19 04:10:24.537683] app.c: 937:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:19:10.551 bdev adbb59a2-2d8b-446f-9aa4-a5968903e0de reports 1 memory domains 00:19:10.551 bdev adbb59a2-2d8b-446f-9aa4-a5968903e0de supports RDMA memory domain 00:19:10.551 Initialization complete, running randrw IO for 5 sec on 2 cores 00:19:10.551 ========================================================================== 00:19:10.551 Latency [us] 00:19:10.551 IOPS MiB/s Average min max 00:19:10.551 Core 2: 20749.48 81.05 770.46 16.72 12264.32 00:19:10.551 Core 3: 20973.03 81.93 762.24 11.14 12036.02 00:19:10.551 ========================================================================== 00:19:10.551 Total : 41722.51 162.98 766.33 11.14 12264.32 00:19:10.551 00:19:10.551 Total operations: 208656, translate 208553 pull_push 0 memzero 103 00:19:10.551 04:10:24 -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:19:10.551 04:10:24 -- host/dma.sh@120 -- # nvmftestfini 00:19:10.551 04:10:24 -- nvmf/common.sh@477 -- # 
nvmfcleanup 00:19:10.551 04:10:24 -- nvmf/common.sh@117 -- # sync 00:19:10.551 04:10:24 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:10.551 04:10:24 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:10.551 04:10:24 -- nvmf/common.sh@120 -- # set +e 00:19:10.551 04:10:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:10.551 04:10:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:10.551 rmmod nvme_rdma 00:19:10.551 rmmod nvme_fabrics 00:19:10.551 04:10:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:10.551 04:10:24 -- nvmf/common.sh@124 -- # set -e 00:19:10.551 04:10:24 -- nvmf/common.sh@125 -- # return 0 00:19:10.551 04:10:24 -- nvmf/common.sh@478 -- # '[' -n 351457 ']' 00:19:10.551 04:10:24 -- nvmf/common.sh@479 -- # killprocess 351457 00:19:10.551 04:10:24 -- common/autotest_common.sh@936 -- # '[' -z 351457 ']' 00:19:10.551 04:10:24 -- common/autotest_common.sh@940 -- # kill -0 351457 00:19:10.551 04:10:24 -- common/autotest_common.sh@941 -- # uname 00:19:10.551 04:10:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:10.551 04:10:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 351457 00:19:10.551 04:10:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:10.551 04:10:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:10.551 04:10:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 351457' 00:19:10.551 killing process with pid 351457 00:19:10.551 04:10:24 -- common/autotest_common.sh@955 -- # kill 351457 00:19:10.551 04:10:24 -- common/autotest_common.sh@960 -- # wait 351457 00:19:10.811 04:10:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:10.811 04:10:25 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:19:10.811 00:19:10.811 real 0m31.625s 00:19:10.811 user 1m36.084s 00:19:10.811 sys 0m5.104s 00:19:10.811 04:10:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:10.811 04:10:25 -- common/autotest_common.sh@10 
-- # set +x 00:19:10.811 ************************************ 00:19:10.811 END TEST dma 00:19:10.811 ************************************ 00:19:10.811 04:10:25 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:19:10.811 04:10:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:10.811 04:10:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:10.811 04:10:25 -- common/autotest_common.sh@10 -- # set +x 00:19:11.071 ************************************ 00:19:11.071 START TEST nvmf_identify 00:19:11.071 ************************************ 00:19:11.071 04:10:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:19:11.071 * Looking for test storage... 00:19:11.071 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:11.071 04:10:25 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:11.071 04:10:25 -- nvmf/common.sh@7 -- # uname -s 00:19:11.071 04:10:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:11.071 04:10:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:11.071 04:10:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:11.071 04:10:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:11.071 04:10:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:11.071 04:10:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:11.071 04:10:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:11.071 04:10:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:11.071 04:10:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:11.071 04:10:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:11.071 04:10:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:19:11.071 04:10:25 -- 
nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:19:11.071 04:10:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:11.071 04:10:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:11.071 04:10:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:11.071 04:10:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:11.071 04:10:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:11.071 04:10:25 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:11.071 04:10:25 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:11.071 04:10:25 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:11.071 04:10:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.071 04:10:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.071 04:10:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.071 04:10:25 -- paths/export.sh@5 -- # export PATH 00:19:11.071 04:10:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.071 04:10:25 -- nvmf/common.sh@47 -- # : 0 00:19:11.071 04:10:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:11.071 04:10:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:11.071 04:10:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:11.071 04:10:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:11.071 04:10:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:11.071 04:10:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:11.071 04:10:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:11.071 04:10:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:11.071 04:10:25 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:11.071 04:10:25 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:11.071 04:10:25 -- host/identify.sh@14 -- # 
nvmftestinit 00:19:11.071 04:10:25 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:19:11.071 04:10:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:11.071 04:10:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:11.071 04:10:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:11.071 04:10:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:11.071 04:10:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.071 04:10:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:11.071 04:10:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.071 04:10:25 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:11.071 04:10:25 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:11.071 04:10:25 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:11.071 04:10:25 -- common/autotest_common.sh@10 -- # set +x 00:19:16.346 04:10:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:16.346 04:10:30 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:16.346 04:10:30 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:16.346 04:10:30 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:16.346 04:10:30 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:16.346 04:10:30 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:16.346 04:10:30 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:16.346 04:10:30 -- nvmf/common.sh@295 -- # net_devs=() 00:19:16.346 04:10:30 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:16.346 04:10:30 -- nvmf/common.sh@296 -- # e810=() 00:19:16.346 04:10:30 -- nvmf/common.sh@296 -- # local -ga e810 00:19:16.346 04:10:30 -- nvmf/common.sh@297 -- # x722=() 00:19:16.346 04:10:30 -- nvmf/common.sh@297 -- # local -ga x722 00:19:16.346 04:10:30 -- nvmf/common.sh@298 -- # mlx=() 00:19:16.346 04:10:30 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:16.346 04:10:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:16.346 04:10:30 -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:16.346 04:10:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:16.346 04:10:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:16.346 04:10:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:16.346 04:10:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:16.346 04:10:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:16.346 04:10:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:16.346 04:10:30 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:16.346 04:10:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:16.346 04:10:30 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:16.346 04:10:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:16.346 04:10:30 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:16.346 04:10:30 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:16.346 04:10:30 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:16.346 04:10:30 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:16.346 04:10:30 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:16.346 04:10:30 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:16.346 04:10:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.346 04:10:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:19:16.346 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:19:16.346 04:10:30 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:16.346 04:10:30 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:16.346 04:10:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:16.346 04:10:30 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:16.346 04:10:30 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 
00:19:16.346 04:10:30 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:16.346 04:10:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.346 04:10:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:19:16.346 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:19:16.346 04:10:30 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:16.346 04:10:30 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:16.346 04:10:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:16.346 04:10:30 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:16.346 04:10:30 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:16.346 04:10:30 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:16.346 04:10:30 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:16.346 04:10:30 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:16.346 04:10:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:16.346 04:10:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.346 04:10:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:16.346 04:10:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.346 04:10:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:16.346 Found net devices under 0000:18:00.0: mlx_0_0 00:19:16.346 04:10:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.346 04:10:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:16.346 04:10:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.346 04:10:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:16.346 04:10:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.346 04:10:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:16.346 Found net devices under 0000:18:00.1: mlx_0_1 00:19:16.346 04:10:30 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:16.346 04:10:30 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:16.346 04:10:30 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:16.346 04:10:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:16.346 04:10:30 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:19:16.346 04:10:30 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:19:16.346 04:10:30 -- nvmf/common.sh@409 -- # rdma_device_init 00:19:16.346 04:10:30 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:19:16.346 04:10:30 -- nvmf/common.sh@58 -- # uname 00:19:16.346 04:10:30 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:16.347 04:10:30 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:16.347 04:10:30 -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:16.347 04:10:30 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:16.347 04:10:30 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:16.347 04:10:30 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:16.347 04:10:30 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:16.347 04:10:30 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:16.347 04:10:30 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:19:16.347 04:10:30 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:16.347 04:10:30 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:16.347 04:10:30 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:16.347 04:10:30 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:16.347 04:10:30 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:16.347 04:10:30 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:16.347 04:10:30 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:16.347 04:10:30 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:16.347 04:10:30 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.347 04:10:30 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:16.347 04:10:30 -- nvmf/common.sh@104 -- # echo 
mlx_0_0 00:19:16.347 04:10:30 -- nvmf/common.sh@105 -- # continue 2 00:19:16.347 04:10:30 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:16.347 04:10:30 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.347 04:10:30 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:16.347 04:10:30 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.347 04:10:30 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:16.347 04:10:30 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:16.347 04:10:30 -- nvmf/common.sh@105 -- # continue 2 00:19:16.347 04:10:30 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:16.347 04:10:30 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:16.347 04:10:30 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:16.347 04:10:30 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:16.347 04:10:30 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:16.347 04:10:30 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:16.347 04:10:30 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:16.347 04:10:30 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:16.347 04:10:30 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:16.347 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:16.347 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:19:16.347 altname enp24s0f0np0 00:19:16.347 altname ens785f0np0 00:19:16.347 inet 192.168.100.8/24 scope global mlx_0_0 00:19:16.347 valid_lft forever preferred_lft forever 00:19:16.347 04:10:30 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:16.347 04:10:30 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:16.347 04:10:30 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:16.347 04:10:30 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:16.347 04:10:30 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:16.347 04:10:30 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:16.347 
04:10:30 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:16.347 04:10:30 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:16.347 04:10:30 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:16.347 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:16.347 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:19:16.347 altname enp24s0f1np1 00:19:16.347 altname ens785f1np1 00:19:16.347 inet 192.168.100.9/24 scope global mlx_0_1 00:19:16.347 valid_lft forever preferred_lft forever 00:19:16.347 04:10:30 -- nvmf/common.sh@411 -- # return 0 00:19:16.347 04:10:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:16.347 04:10:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:16.347 04:10:30 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:19:16.347 04:10:30 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:19:16.347 04:10:30 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:16.347 04:10:30 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:16.347 04:10:30 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:16.347 04:10:30 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:16.347 04:10:30 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:16.347 04:10:30 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:16.347 04:10:30 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:16.347 04:10:30 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.347 04:10:30 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:16.347 04:10:30 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:16.347 04:10:30 -- nvmf/common.sh@105 -- # continue 2 00:19:16.347 04:10:30 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:16.347 04:10:30 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.347 04:10:30 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:16.347 04:10:30 -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.347 04:10:30 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:16.347 04:10:30 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:16.347 04:10:30 -- nvmf/common.sh@105 -- # continue 2 00:19:16.347 04:10:30 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:16.347 04:10:30 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:16.347 04:10:30 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:16.347 04:10:30 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:16.347 04:10:30 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:16.347 04:10:30 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:16.347 04:10:30 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:16.347 04:10:30 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:16.347 04:10:30 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:16.347 04:10:30 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:16.347 04:10:30 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:16.347 04:10:30 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:16.347 04:10:30 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:19:16.347 192.168.100.9' 00:19:16.347 04:10:30 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:16.347 192.168.100.9' 00:19:16.347 04:10:30 -- nvmf/common.sh@446 -- # head -n 1 00:19:16.347 04:10:30 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:16.347 04:10:30 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:19:16.347 192.168.100.9' 00:19:16.347 04:10:30 -- nvmf/common.sh@447 -- # tail -n +2 00:19:16.347 04:10:30 -- nvmf/common.sh@447 -- # head -n 1 00:19:16.347 04:10:30 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:16.347 04:10:30 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:19:16.347 04:10:30 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:16.347 04:10:30 -- nvmf/common.sh@457 -- # '[' rdma 
== tcp ']' 00:19:16.347 04:10:30 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:19:16.347 04:10:30 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:19:16.347 04:10:30 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:16.347 04:10:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:16.347 04:10:30 -- common/autotest_common.sh@10 -- # set +x 00:19:16.347 04:10:30 -- host/identify.sh@19 -- # nvmfpid=359203 00:19:16.347 04:10:30 -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:16.347 04:10:30 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:16.347 04:10:30 -- host/identify.sh@23 -- # waitforlisten 359203 00:19:16.347 04:10:30 -- common/autotest_common.sh@817 -- # '[' -z 359203 ']' 00:19:16.347 04:10:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.347 04:10:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:16.347 04:10:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.347 04:10:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:16.347 04:10:30 -- common/autotest_common.sh@10 -- # set +x 00:19:16.347 [2024-04-19 04:10:30.835875] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
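The `get_rdma_if_list` trace above (nvmf/common.sh@101-105) keeps only the net devices that `rxe_cfg` also reports, using a nested loop with `continue 2`. A minimal sketch with hard-coded stand-in arrays for this run's devices (the real helper populates `rxe_net_devs` from `rxe_cfg rxe-net` output via `mapfile`):

```shell
#!/usr/bin/env bash
# Sketch of the get_rdma_if_list filtering traced above. The arrays are
# stand-ins for this run's devices; real code fills them from the PCI scan
# and from rxe_cfg output.
net_devs=(mlx_0_0 mlx_0_1 eno1)
rxe_net_devs=(mlx_0_0 mlx_0_1)

rdma_ifs=()
for net_dev in "${net_devs[@]}"; do
  for rxe_net_dev in "${rxe_net_devs[@]}"; do
    if [[ $net_dev == "$rxe_net_dev" ]]; then
      rdma_ifs+=("$net_dev")
      continue 2  # matched: move on to the next net_dev, as in the trace
    fi
  done
done
printf '%s\n' "${rdma_ifs[@]}"
```

The odd-looking `[[ mlx_0_0 == \m\l\x\_\0\_\0 ]]` in the xtrace is just how bash prints a quoted right-hand side of `[[ == ]]`: each character is backslash-escaped so it cannot act as a glob pattern.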
00:19:16.347 [2024-04-19 04:10:30.835927] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.347 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.606 [2024-04-19 04:10:30.888657] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:16.606 [2024-04-19 04:10:30.965676] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.606 [2024-04-19 04:10:30.965711] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.606 [2024-04-19 04:10:30.965717] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.606 [2024-04-19 04:10:30.965723] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.606 [2024-04-19 04:10:30.965727] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
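The address discovery traced earlier (`get_ip_address` at nvmf/common.sh@112-113, then `NVMF_FIRST_TARGET_IP`/`NVMF_SECOND_TARGET_IP` at @446-447) boils down to one pipeline plus `head`/`tail` selection. A sketch with a canned `ip -o -4 addr show` line standing in for the live command (the real `get_ip_address` takes only the interface name):

```shell
#!/usr/bin/env bash
# Stand-in for 'ip -o -4 addr show <if>': one '-o' (oneline) record per address.
ip_o4() {
  printf '8: %s    inet %s/24 brd 192.168.100.255 scope global %s\n' "$1" "$2" "$1"
}

# Same pipeline as nvmf/common.sh@113: 4th field is addr/prefix, cut strips the prefix.
get_ip_address() {
  ip_o4 "$1" "$2" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST="$(get_ip_address mlx_0_0 192.168.100.8)
$(get_ip_address mlx_0_1 192.168.100.9)"

NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"  # prints: 192.168.100.8 192.168.100.9
```

`tail -n +2 | head -n 1` picks exactly the second line of the two-line IP list, which is why the trace shows both commands run back to back for `NVMF_SECOND_TARGET_IP`.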
00:19:16.606 [2024-04-19 04:10:30.965768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.606 [2024-04-19 04:10:30.965835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.606 [2024-04-19 04:10:30.965931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:16.606 [2024-04-19 04:10:30.965932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.171 04:10:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:17.171 04:10:31 -- common/autotest_common.sh@850 -- # return 0 00:19:17.171 04:10:31 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:17.171 04:10:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.171 04:10:31 -- common/autotest_common.sh@10 -- # set +x 00:19:17.171 [2024-04-19 04:10:31.646951] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfa06c0/0xfa4bb0) succeed. 00:19:17.171 [2024-04-19 04:10:31.656087] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfa1cb0/0xfe6240) succeed. 
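The `-m 0xF` core mask passed to `nvmf_tgt` above selects cores by set bit, which is why the app reports "Total cores available: 4" and starts reactors on cores 0-3. A small sketch of that decoding (the helper name `mask_to_cores` is illustrative, not an SPDK function):

```shell
#!/usr/bin/env bash
# Decode a hex core mask into the list of core IDs it enables: bit N set
# means core N runs a reactor. 0xF -> cores 0 1 2 3, matching the log.
mask_to_cores() {
  local mask=$(( $1 )) bit=0
  local cores=()
  while (( mask )); do
    if (( mask & 1 )); then
      cores+=("$bit")
    fi
    mask=$(( mask >> 1 ))
    bit=$(( bit + 1 ))
  done
  echo "${cores[*]}"
}
mask_to_cores 0xF  # prints: 0 1 2 3
```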
00:19:17.431 04:10:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.431 04:10:31 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:19:17.431 04:10:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:17.431 04:10:31 -- common/autotest_common.sh@10 -- # set +x 00:19:17.431 04:10:31 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:17.431 04:10:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.431 04:10:31 -- common/autotest_common.sh@10 -- # set +x 00:19:17.431 Malloc0 00:19:17.431 04:10:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.431 04:10:31 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:17.431 04:10:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.431 04:10:31 -- common/autotest_common.sh@10 -- # set +x 00:19:17.431 04:10:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.431 04:10:31 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:19:17.431 04:10:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.431 04:10:31 -- common/autotest_common.sh@10 -- # set +x 00:19:17.431 04:10:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.431 04:10:31 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:17.431 04:10:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.431 04:10:31 -- common/autotest_common.sh@10 -- # set +x 00:19:17.431 [2024-04-19 04:10:31.845851] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:17.431 04:10:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.431 04:10:31 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 
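The target setup traced above is six `rpc_cmd` calls: create the RDMA transport, create a malloc bdev, create the subsystem, attach the namespace, then add listeners for the subsystem and for discovery. Replayed here with `rpc_cmd` stubbed out so the sequence is visible without a running `nvmf_tgt`; all names (Malloc0, cnode1, the NGUID/EUI64, 192.168.100.8) come straight from the log:

```shell
#!/usr/bin/env bash
# Stub: record each RPC method name instead of talking to /var/tmp/spdk.sock.
rpc_cmd() { calls+=("$1"); }

calls=()
rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
  --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
printf '%s\n' "${calls[@]}"
```

The two listeners explain the two entries `nvmf_get_subsystems` returns below: the discovery subsystem and `cnode1`, both listening on 192.168.100.8:4420 over RDMA.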
00:19:17.431 04:10:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.431 04:10:31 -- common/autotest_common.sh@10 -- # set +x 00:19:17.431 04:10:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.431 04:10:31 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:19:17.431 04:10:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.431 04:10:31 -- common/autotest_common.sh@10 -- # set +x 00:19:17.431 [2024-04-19 04:10:31.861546] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:19:17.431 [ 00:19:17.431 { 00:19:17.431 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:17.431 "subtype": "Discovery", 00:19:17.431 "listen_addresses": [ 00:19:17.431 { 00:19:17.431 "transport": "RDMA", 00:19:17.431 "trtype": "RDMA", 00:19:17.431 "adrfam": "IPv4", 00:19:17.431 "traddr": "192.168.100.8", 00:19:17.431 "trsvcid": "4420" 00:19:17.431 } 00:19:17.431 ], 00:19:17.431 "allow_any_host": true, 00:19:17.431 "hosts": [] 00:19:17.431 }, 00:19:17.431 { 00:19:17.431 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:17.431 "subtype": "NVMe", 00:19:17.431 "listen_addresses": [ 00:19:17.431 { 00:19:17.431 "transport": "RDMA", 00:19:17.431 "trtype": "RDMA", 00:19:17.431 "adrfam": "IPv4", 00:19:17.431 "traddr": "192.168.100.8", 00:19:17.431 "trsvcid": "4420" 00:19:17.431 } 00:19:17.431 ], 00:19:17.431 "allow_any_host": true, 00:19:17.431 "hosts": [], 00:19:17.431 "serial_number": "SPDK00000000000001", 00:19:17.431 "model_number": "SPDK bdev Controller", 00:19:17.431 "max_namespaces": 32, 00:19:17.431 "min_cntlid": 1, 00:19:17.431 "max_cntlid": 65519, 00:19:17.431 "namespaces": [ 00:19:17.431 { 00:19:17.431 "nsid": 1, 00:19:17.431 "bdev_name": "Malloc0", 00:19:17.431 "name": "Malloc0", 00:19:17.431 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:19:17.431 "eui64": "ABCDEF0123456789", 00:19:17.431 "uuid": 
"ae20b4e9-e2c1-4faf-b5b8-c210d11ec713" 00:19:17.431 } 00:19:17.431 ] 00:19:17.431 } 00:19:17.431 ] 00:19:17.431 04:10:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.431 04:10:31 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:19:17.431 [2024-04-19 04:10:31.897816] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:19:17.431 [2024-04-19 04:10:31.897862] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid359485 ] 00:19:17.431 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.431 [2024-04-19 04:10:31.938074] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:19:17.431 [2024-04-19 04:10:31.938144] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:19:17.431 [2024-04-19 04:10:31.938156] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:19:17.431 [2024-04-19 04:10:31.938159] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:19:17.431 [2024-04-19 04:10:31.938184] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:19:17.431 [2024-04-19 04:10:31.948955] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:19:17.700 [2024-04-19 04:10:31.962628] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:19:17.700 [2024-04-19 04:10:31.962639] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:19:17.700 [2024-04-19 04:10:31.962645] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x183800 00:19:17.700 [2024-04-19 04:10:31.962652] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x183800 00:19:17.700 [2024-04-19 04:10:31.962657] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x183800 00:19:17.700 [2024-04-19 04:10:31.962660] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x183800 00:19:17.700 [2024-04-19 04:10:31.962668] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x183800 00:19:17.700 [2024-04-19 04:10:31.962672] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x183800 00:19:17.700 [2024-04-19 04:10:31.962677] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x183800 00:19:17.700 [2024-04-19 04:10:31.962682] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x183800 00:19:17.700 [2024-04-19 04:10:31.962686] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x183800 00:19:17.700 [2024-04-19 04:10:31.962690] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x183800 00:19:17.700 [2024-04-19 04:10:31.962694] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x183800 00:19:17.700 [2024-04-19 04:10:31.962698] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x183800 00:19:17.700 [2024-04-19 04:10:31.962702] nvme_rdma.c: 
968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x183800 00:19:17.700 [2024-04-19 04:10:31.962706] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x183800 00:19:17.700 [2024-04-19 04:10:31.962710] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x183800 00:19:17.700 [2024-04-19 04:10:31.962716] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x183800 00:19:17.700 [2024-04-19 04:10:31.962721] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x183800 00:19:17.700 [2024-04-19 04:10:31.962725] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x183800 00:19:17.700 [2024-04-19 04:10:31.962728] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.962732] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.962736] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.962740] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.962744] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.962748] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.962752] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.962757] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.962763] 
nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.962768] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.962773] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.962777] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.962781] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.962784] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:19:17.701 [2024-04-19 04:10:31.962789] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:19:17.701 [2024-04-19 04:10:31.962791] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:19:17.701 [2024-04-19 04:10:31.962807] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.962818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf200 len:0x400 key:0x183800 00:19:17.701 [2024-04-19 04:10:31.968405] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.701 [2024-04-19 04:10:31.968414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:19:17.701 [2024-04-19 04:10:31.968419] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.968424] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:17.701 [2024-04-19 04:10:31.968429] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:19:17.701 [2024-04-19 04:10:31.968433] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:19:17.701 [2024-04-19 04:10:31.968445] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.968451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.701 [2024-04-19 04:10:31.968476] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.701 [2024-04-19 04:10:31.968480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:19:17.701 [2024-04-19 04:10:31.968486] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:19:17.701 [2024-04-19 04:10:31.968490] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.968494] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:19:17.701 [2024-04-19 04:10:31.968500] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.968505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.701 [2024-04-19 04:10:31.968518] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.701 [2024-04-19 04:10:31.968522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 
dnr:0 00:19:17.701 [2024-04-19 04:10:31.968526] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:19:17.701 [2024-04-19 04:10:31.968530] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.968535] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:19:17.701 [2024-04-19 04:10:31.968540] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.968545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.701 [2024-04-19 04:10:31.968561] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.701 [2024-04-19 04:10:31.968565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:17.701 [2024-04-19 04:10:31.968569] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:17.701 [2024-04-19 04:10:31.968573] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.968579] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.968584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.701 [2024-04-19 04:10:31.968602] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.701 [2024-04-19 04:10:31.968606] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:17.701 [2024-04-19 04:10:31.968610] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:19:17.701 [2024-04-19 04:10:31.968613] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:19:17.701 [2024-04-19 04:10:31.968617] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.968621] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:17.701 [2024-04-19 04:10:31.968725] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:19:17.701 [2024-04-19 04:10:31.968729] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:17.701 [2024-04-19 04:10:31.968736] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.968741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.701 [2024-04-19 04:10:31.968758] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.701 [2024-04-19 04:10:31.968762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:17.701 [2024-04-19 04:10:31.968766] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:17.701 
[2024-04-19 04:10:31.968770] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.968775] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.968781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.701 [2024-04-19 04:10:31.968795] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.701 [2024-04-19 04:10:31.968799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:19:17.701 [2024-04-19 04:10:31.968803] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:17.701 [2024-04-19 04:10:31.968807] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:19:17.701 [2024-04-19 04:10:31.968810] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.968815] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:19:17.701 [2024-04-19 04:10:31.968821] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:19:17.701 [2024-04-19 04:10:31.968828] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.968833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK 
ADDRESS 0x2000003ce000 len:0x1000 key:0x183800 00:19:17.701 [2024-04-19 04:10:31.968869] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.701 [2024-04-19 04:10:31.968874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:17.701 [2024-04-19 04:10:31.968880] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:19:17.701 [2024-04-19 04:10:31.968883] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:19:17.701 [2024-04-19 04:10:31.968887] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:19:17.701 [2024-04-19 04:10:31.968892] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:19:17.701 [2024-04-19 04:10:31.968896] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:19:17.701 [2024-04-19 04:10:31.968899] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:19:17.701 [2024-04-19 04:10:31.968903] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.968908] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:19:17.701 [2024-04-19 04:10:31.968913] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.968919] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 
00:19:17.701 [2024-04-19 04:10:31.968936] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.701 [2024-04-19 04:10:31.968940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:17.701 [2024-04-19 04:10:31.968947] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0500 length 0x40 lkey 0x183800 00:19:17.701 [2024-04-19 04:10:31.968952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.701 [2024-04-19 04:10:31.968957] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0640 length 0x40 lkey 0x183800 00:19:17.702 [2024-04-19 04:10:31.968962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.702 [2024-04-19 04:10:31.968967] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.702 [2024-04-19 04:10:31.968972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.702 [2024-04-19 04:10:31.968976] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d08c0 length 0x40 lkey 0x183800 00:19:17.702 [2024-04-19 04:10:31.968980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.702 [2024-04-19 04:10:31.968984] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:19:17.702 [2024-04-19 04:10:31.968987] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x183800 00:19:17.702 [2024-04-19 04:10:31.968995] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:17.702 [2024-04-19 04:10:31.969000] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.702 [2024-04-19 04:10:31.969005] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.702 [2024-04-19 04:10:31.969020] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.702 [2024-04-19 04:10:31.969026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:19:17.702 [2024-04-19 04:10:31.969030] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:19:17.702 [2024-04-19 04:10:31.969034] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:19:17.702 [2024-04-19 04:10:31.969038] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x183800 00:19:17.702 [2024-04-19 04:10:31.969044] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.702 [2024-04-19 04:10:31.969049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183800 00:19:17.702 [2024-04-19 04:10:31.969074] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.702 [2024-04-19 04:10:31.969077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:17.702 [2024-04-19 04:10:31.969082] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x183800 00:19:17.702 [2024-04-19 04:10:31.969089] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:19:17.702 [2024-04-19 04:10:31.969104] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.702 [2024-04-19 04:10:31.969109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x183800 00:19:17.702 [2024-04-19 04:10:31.969115] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 lkey 0x183800 00:19:17.702 [2024-04-19 04:10:31.969119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.702 [2024-04-19 04:10:31.969136] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.702 [2024-04-19 04:10:31.969140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:17.702 [2024-04-19 04:10:31.969147] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b40 length 0x40 lkey 0x183800 00:19:17.702 [2024-04-19 04:10:31.969153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x183800 00:19:17.702 [2024-04-19 04:10:31.969156] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x183800 00:19:17.702 [2024-04-19 04:10:31.969161] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.702 [2024-04-19 04:10:31.969164] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:17.702 [2024-04-19 04:10:31.969168] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x183800 00:19:17.702 [2024-04-19 04:10:31.969186] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.702 [2024-04-19 04:10:31.969190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:17.702 [2024-04-19 04:10:31.969197] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 lkey 0x183800 00:19:17.702 [2024-04-19 04:10:31.969203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x183800 00:19:17.702 [2024-04-19 04:10:31.969207] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x183800 00:19:17.702 [2024-04-19 04:10:31.969228] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.702 [2024-04-19 04:10:31.969234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:17.702 [2024-04-19 04:10:31.969242] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x183800 00:19:17.702 ===================================================== 00:19:17.702 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:17.702 ===================================================== 00:19:17.702 Controller Capabilities/Features 00:19:17.702 ================================ 00:19:17.702 Vendor ID: 0000 00:19:17.702 Subsystem Vendor ID: 0000 00:19:17.702 Serial Number: .................... 00:19:17.702 Model Number: ........................................ 
00:19:17.702 Firmware Version: 24.05 00:19:17.702 Recommended Arb Burst: 0 00:19:17.702 IEEE OUI Identifier: 00 00 00 00:19:17.702 Multi-path I/O 00:19:17.702 May have multiple subsystem ports: No 00:19:17.702 May have multiple controllers: No 00:19:17.702 Associated with SR-IOV VF: No 00:19:17.702 Max Data Transfer Size: 131072 00:19:17.702 Max Number of Namespaces: 0 00:19:17.702 Max Number of I/O Queues: 1024 00:19:17.702 NVMe Specification Version (VS): 1.3 00:19:17.702 NVMe Specification Version (Identify): 1.3 00:19:17.702 Maximum Queue Entries: 128 00:19:17.702 Contiguous Queues Required: Yes 00:19:17.702 Arbitration Mechanisms Supported 00:19:17.702 Weighted Round Robin: Not Supported 00:19:17.702 Vendor Specific: Not Supported 00:19:17.702 Reset Timeout: 15000 ms 00:19:17.702 Doorbell Stride: 4 bytes 00:19:17.702 NVM Subsystem Reset: Not Supported 00:19:17.702 Command Sets Supported 00:19:17.702 NVM Command Set: Supported 00:19:17.702 Boot Partition: Not Supported 00:19:17.702 Memory Page Size Minimum: 4096 bytes 00:19:17.702 Memory Page Size Maximum: 4096 bytes 00:19:17.702 Persistent Memory Region: Not Supported 00:19:17.702 Optional Asynchronous Events Supported 00:19:17.702 Namespace Attribute Notices: Not Supported 00:19:17.702 Firmware Activation Notices: Not Supported 00:19:17.702 ANA Change Notices: Not Supported 00:19:17.702 PLE Aggregate Log Change Notices: Not Supported 00:19:17.702 LBA Status Info Alert Notices: Not Supported 00:19:17.702 EGE Aggregate Log Change Notices: Not Supported 00:19:17.702 Normal NVM Subsystem Shutdown event: Not Supported 00:19:17.702 Zone Descriptor Change Notices: Not Supported 00:19:17.702 Discovery Log Change Notices: Supported 00:19:17.702 Controller Attributes 00:19:17.702 128-bit Host Identifier: Not Supported 00:19:17.702 Non-Operational Permissive Mode: Not Supported 00:19:17.702 NVM Sets: Not Supported 00:19:17.702 Read Recovery Levels: Not Supported 00:19:17.702 Endurance Groups: Not Supported 00:19:17.702 
Predictable Latency Mode: Not Supported 00:19:17.702 Traffic Based Keep ALive: Not Supported 00:19:17.702 Namespace Granularity: Not Supported 00:19:17.702 SQ Associations: Not Supported 00:19:17.702 UUID List: Not Supported 00:19:17.702 Multi-Domain Subsystem: Not Supported 00:19:17.702 Fixed Capacity Management: Not Supported 00:19:17.702 Variable Capacity Management: Not Supported 00:19:17.702 Delete Endurance Group: Not Supported 00:19:17.702 Delete NVM Set: Not Supported 00:19:17.702 Extended LBA Formats Supported: Not Supported 00:19:17.702 Flexible Data Placement Supported: Not Supported 00:19:17.702 00:19:17.702 Controller Memory Buffer Support 00:19:17.702 ================================ 00:19:17.702 Supported: No 00:19:17.702 00:19:17.702 Persistent Memory Region Support 00:19:17.702 ================================ 00:19:17.702 Supported: No 00:19:17.702 00:19:17.702 Admin Command Set Attributes 00:19:17.702 ============================ 00:19:17.702 Security Send/Receive: Not Supported 00:19:17.702 Format NVM: Not Supported 00:19:17.702 Firmware Activate/Download: Not Supported 00:19:17.702 Namespace Management: Not Supported 00:19:17.702 Device Self-Test: Not Supported 00:19:17.702 Directives: Not Supported 00:19:17.702 NVMe-MI: Not Supported 00:19:17.702 Virtualization Management: Not Supported 00:19:17.702 Doorbell Buffer Config: Not Supported 00:19:17.702 Get LBA Status Capability: Not Supported 00:19:17.702 Command & Feature Lockdown Capability: Not Supported 00:19:17.702 Abort Command Limit: 1 00:19:17.702 Async Event Request Limit: 4 00:19:17.702 Number of Firmware Slots: N/A 00:19:17.702 Firmware Slot 1 Read-Only: N/A 00:19:17.702 Firmware Activation Without Reset: N/A 00:19:17.702 Multiple Update Detection Support: N/A 00:19:17.702 Firmware Update Granularity: No Information Provided 00:19:17.702 Per-Namespace SMART Log: No 00:19:17.702 Asymmetric Namespace Access Log Page: Not Supported 00:19:17.702 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:19:17.702 Command Effects Log Page: Not Supported 00:19:17.702 Get Log Page Extended Data: Supported 00:19:17.702 Telemetry Log Pages: Not Supported 00:19:17.702 Persistent Event Log Pages: Not Supported 00:19:17.702 Supported Log Pages Log Page: May Support 00:19:17.702 Commands Supported & Effects Log Page: Not Supported 00:19:17.703 Feature Identifiers & Effects Log Page:May Support 00:19:17.703 NVMe-MI Commands & Effects Log Page: May Support 00:19:17.703 Data Area 4 for Telemetry Log: Not Supported 00:19:17.703 Error Log Page Entries Supported: 128 00:19:17.703 Keep Alive: Not Supported 00:19:17.703 00:19:17.703 NVM Command Set Attributes 00:19:17.703 ========================== 00:19:17.703 Submission Queue Entry Size 00:19:17.703 Max: 1 00:19:17.703 Min: 1 00:19:17.703 Completion Queue Entry Size 00:19:17.703 Max: 1 00:19:17.703 Min: 1 00:19:17.703 Number of Namespaces: 0 00:19:17.703 Compare Command: Not Supported 00:19:17.703 Write Uncorrectable Command: Not Supported 00:19:17.703 Dataset Management Command: Not Supported 00:19:17.703 Write Zeroes Command: Not Supported 00:19:17.703 Set Features Save Field: Not Supported 00:19:17.703 Reservations: Not Supported 00:19:17.703 Timestamp: Not Supported 00:19:17.703 Copy: Not Supported 00:19:17.703 Volatile Write Cache: Not Present 00:19:17.703 Atomic Write Unit (Normal): 1 00:19:17.703 Atomic Write Unit (PFail): 1 00:19:17.703 Atomic Compare & Write Unit: 1 00:19:17.703 Fused Compare & Write: Supported 00:19:17.703 Scatter-Gather List 00:19:17.703 SGL Command Set: Supported 00:19:17.703 SGL Keyed: Supported 00:19:17.703 SGL Bit Bucket Descriptor: Not Supported 00:19:17.703 SGL Metadata Pointer: Not Supported 00:19:17.703 Oversized SGL: Not Supported 00:19:17.703 SGL Metadata Address: Not Supported 00:19:17.703 SGL Offset: Supported 00:19:17.703 Transport SGL Data Block: Not Supported 00:19:17.703 Replay Protected Memory Block: Not Supported 00:19:17.703 00:19:17.703 
Firmware Slot Information 00:19:17.703 ========================= 00:19:17.703 Active slot: 0 00:19:17.703 00:19:17.703 00:19:17.703 Error Log 00:19:17.703 ========= 00:19:17.703 00:19:17.703 Active Namespaces 00:19:17.703 ================= 00:19:17.703 Discovery Log Page 00:19:17.703 ================== 00:19:17.703 Generation Counter: 2 00:19:17.703 Number of Records: 2 00:19:17.703 Record Format: 0 00:19:17.703 00:19:17.703 Discovery Log Entry 0 00:19:17.703 ---------------------- 00:19:17.703 Transport Type: 1 (RDMA) 00:19:17.703 Address Family: 1 (IPv4) 00:19:17.703 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:17.703 Entry Flags: 00:19:17.703 Duplicate Returned Information: 1 00:19:17.703 Explicit Persistent Connection Support for Discovery: 1 00:19:17.703 Transport Requirements: 00:19:17.703 Secure Channel: Not Required 00:19:17.703 Port ID: 0 (0x0000) 00:19:17.703 Controller ID: 65535 (0xffff) 00:19:17.703 Admin Max SQ Size: 128 00:19:17.703 Transport Service Identifier: 4420 00:19:17.703 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:17.703 Transport Address: 192.168.100.8 00:19:17.703 Transport Specific Address Subtype - RDMA 00:19:17.703 RDMA QP Service Type: 1 (Reliable Connected) 00:19:17.703 RDMA Provider Type: 1 (No provider specified) 00:19:17.703 RDMA CM Service: 1 (RDMA_CM) 00:19:17.703 Discovery Log Entry 1 00:19:17.703 ---------------------- 00:19:17.703 Transport Type: 1 (RDMA) 00:19:17.703 Address Family: 1 (IPv4) 00:19:17.703 Subsystem Type: 2 (NVM Subsystem) 00:19:17.703 Entry Flags: 00:19:17.703 Duplicate Returned Information: 0 00:19:17.703 Explicit Persistent Connection Support for Discovery: 0 00:19:17.703 Transport Requirements: 00:19:17.703 Secure Channel: Not Required 00:19:17.703 Port ID: 0 (0x0000) 00:19:17.703 Controller ID: 65535 (0xffff) 00:19:17.703 Admin Max SQ Size: [2024-04-19 04:10:31.969301] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare 
to destruct SSD 00:19:17.703 [2024-04-19 04:10:31.969309] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 48331 doesn't match qid 00:19:17.703 [2024-04-19 04:10:31.969320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32594 cdw0:5 sqhd:1790 p:0 m:0 dnr:0 00:19:17.703 [2024-04-19 04:10:31.969325] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 48331 doesn't match qid 00:19:17.703 [2024-04-19 04:10:31.969330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32594 cdw0:5 sqhd:1790 p:0 m:0 dnr:0 00:19:17.703 [2024-04-19 04:10:31.969334] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 48331 doesn't match qid 00:19:17.703 [2024-04-19 04:10:31.969339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32594 cdw0:5 sqhd:1790 p:0 m:0 dnr:0 00:19:17.703 [2024-04-19 04:10:31.969344] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 48331 doesn't match qid 00:19:17.703 [2024-04-19 04:10:31.969348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32594 cdw0:5 sqhd:1790 p:0 m:0 dnr:0 00:19:17.703 [2024-04-19 04:10:31.969355] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d08c0 length 0x40 lkey 0x183800 00:19:17.703 [2024-04-19 04:10:31.969362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.703 [2024-04-19 04:10:31.969374] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.703 [2024-04-19 04:10:31.969379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:19:17.703 [2024-04-19 04:10:31.969386] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 
00:19:17.703 [2024-04-19 04:10:31.969392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.703 [2024-04-19 04:10:31.969395] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x183800 00:19:17.703 [2024-04-19 04:10:31.969418] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.703 [2024-04-19 04:10:31.969423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:17.703 [2024-04-19 04:10:31.969427] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:19:17.703 [2024-04-19 04:10:31.969430] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:19:17.703 [2024-04-19 04:10:31.969434] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x183800 00:19:17.703 [2024-04-19 04:10:31.969440] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.703 [2024-04-19 04:10:31.969445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.703 [2024-04-19 04:10:31.969463] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.703 [2024-04-19 04:10:31.969467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:19:17.703 [2024-04-19 04:10:31.969472] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x183800 00:19:17.703 [2024-04-19 04:10:31.969478] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.703 
[2024-04-19 04:10:31.969485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.703 [2024-04-19 04:10:31.969502] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.703 [2024-04-19 04:10:31.969506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:19:17.703 [2024-04-19 04:10:31.969510] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x183800 00:19:17.703 [2024-04-19 04:10:31.969516] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.703 [2024-04-19 04:10:31.969522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.703 [2024-04-19 04:10:31.969538] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.703 [2024-04-19 04:10:31.969542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:19:17.703 [2024-04-19 04:10:31.969547] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x183800 00:19:17.703 [2024-04-19 04:10:31.969554] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.703 [2024-04-19 04:10:31.969560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.703 [2024-04-19 04:10:31.969576] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.703 [2024-04-19 04:10:31.969580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:19:17.703 [2024-04-19 
04:10:31.969584] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x183800 00:19:17.703 [2024-04-19 04:10:31.969590] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.703 [2024-04-19 04:10:31.969595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.703 [2024-04-19 04:10:31.969609] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.703 [2024-04-19 04:10:31.969613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:19:17.703 [2024-04-19 04:10:31.969617] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x183800 00:19:17.703 [2024-04-19 04:10:31.969623] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.703 [2024-04-19 04:10:31.969629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.703 [2024-04-19 04:10:31.969644] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.703 [2024-04-19 04:10:31.969648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:19:17.703 [2024-04-19 04:10:31.969652] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.969658] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.969663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.704 [2024-04-19 
04:10:31.969683] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.704 [2024-04-19 04:10:31.969687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:19:17.704 [2024-04-19 04:10:31.969691] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.969698] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.969704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.704 [2024-04-19 04:10:31.969722] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.704 [2024-04-19 04:10:31.969727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:19:17.704 [2024-04-19 04:10:31.969731] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.969737] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.969742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.704 [2024-04-19 04:10:31.969754] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.704 [2024-04-19 04:10:31.969758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:19:17.704 [2024-04-19 04:10:31.969762] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.969768] 
nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.969774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.704 [2024-04-19 04:10:31.969791] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.704 [2024-04-19 04:10:31.969795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:19:17.704 [2024-04-19 04:10:31.969799] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.969805] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.969811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.704 [2024-04-19 04:10:31.969826] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.704 [2024-04-19 04:10:31.969830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:19:17.704 [2024-04-19 04:10:31.969834] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.969840] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.969846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.704 [2024-04-19 04:10:31.969860] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.704 [2024-04-19 04:10:31.969864] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:19:17.704 [2024-04-19 04:10:31.969868] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.969874] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.969879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.704 [2024-04-19 04:10:31.969897] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.704 [2024-04-19 04:10:31.969901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:19:17.704 [2024-04-19 04:10:31.969905] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.969912] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.969917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.704 [2024-04-19 04:10:31.969933] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.704 [2024-04-19 04:10:31.969937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:19:17.704 [2024-04-19 04:10:31.969941] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.969947] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.969953] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.704 [2024-04-19 04:10:31.969966] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.704 [2024-04-19 04:10:31.969970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:19:17.704 [2024-04-19 04:10:31.969974] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.969980] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.969985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.704 [2024-04-19 04:10:31.970003] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.704 [2024-04-19 04:10:31.970006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:19:17.704 [2024-04-19 04:10:31.970010] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.970016] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.970022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.704 [2024-04-19 04:10:31.970041] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.704 [2024-04-19 04:10:31.970044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:19:17.704 [2024-04-19 04:10:31.970048] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.970054] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.970060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.704 [2024-04-19 04:10:31.970077] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.704 [2024-04-19 04:10:31.970081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:19:17.704 [2024-04-19 04:10:31.970085] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.970091] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.970096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.704 [2024-04-19 04:10:31.970114] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.704 [2024-04-19 04:10:31.970117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:19:17.704 [2024-04-19 04:10:31.970121] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.970129] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.970134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.704 [2024-04-19 04:10:31.970153] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.704 [2024-04-19 04:10:31.970157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:19:17.704 [2024-04-19 04:10:31.970161] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.970167] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.970172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.704 [2024-04-19 04:10:31.970190] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.704 [2024-04-19 04:10:31.970193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:19:17.704 [2024-04-19 04:10:31.970197] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.970203] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.970208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.704 [2024-04-19 04:10:31.970229] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.704 [2024-04-19 04:10:31.970233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:19:17.704 [2024-04-19 04:10:31.970237] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.970243] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.970248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.704 [2024-04-19 04:10:31.970267] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.704 [2024-04-19 04:10:31.970271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:19:17.704 [2024-04-19 04:10:31.970275] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.970281] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.704 [2024-04-19 04:10:31.970286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.704 [2024-04-19 04:10:31.970308] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.704 [2024-04-19 04:10:31.970312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:19:17.704 [2024-04-19 04:10:31.970315] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970321] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.705 [2024-04-19 04:10:31.970348] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.705 [2024-04-19 04:10:31.970352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 
cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:19:17.705 [2024-04-19 04:10:31.970357] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970363] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.705 [2024-04-19 04:10:31.970383] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.705 [2024-04-19 04:10:31.970387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:19:17.705 [2024-04-19 04:10:31.970391] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970397] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.705 [2024-04-19 04:10:31.970430] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.705 [2024-04-19 04:10:31.970434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:19:17.705 [2024-04-19 04:10:31.970437] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970443] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED 
DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.705 [2024-04-19 04:10:31.970469] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.705 [2024-04-19 04:10:31.970473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:19:17.705 [2024-04-19 04:10:31.970477] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970483] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.705 [2024-04-19 04:10:31.970501] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.705 [2024-04-19 04:10:31.970505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:19:17.705 [2024-04-19 04:10:31.970509] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970515] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.705 [2024-04-19 04:10:31.970532] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.705 [2024-04-19 04:10:31.970536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:19:17.705 [2024-04-19 04:10:31.970540] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x183800 00:19:17.705 
[2024-04-19 04:10:31.970546] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.705 [2024-04-19 04:10:31.970565] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.705 [2024-04-19 04:10:31.970568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:19:17.705 [2024-04-19 04:10:31.970574] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970580] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.705 [2024-04-19 04:10:31.970600] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.705 [2024-04-19 04:10:31.970604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:19:17.705 [2024-04-19 04:10:31.970609] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970616] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.705 [2024-04-19 04:10:31.970639] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.705 [2024-04-19 
04:10:31.970642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:19:17.705 [2024-04-19 04:10:31.970646] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970652] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.705 [2024-04-19 04:10:31.970677] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.705 [2024-04-19 04:10:31.970680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:19:17.705 [2024-04-19 04:10:31.970684] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970690] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.705 [2024-04-19 04:10:31.970709] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.705 [2024-04-19 04:10:31.970713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:19:17.705 [2024-04-19 04:10:31.970717] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970723] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970728] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.705 [2024-04-19 04:10:31.970744] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.705 [2024-04-19 04:10:31.970748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:19:17.705 [2024-04-19 04:10:31.970752] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970758] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.705 [2024-04-19 04:10:31.970779] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.705 [2024-04-19 04:10:31.970785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:19:17.705 [2024-04-19 04:10:31.970789] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970795] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.705 [2024-04-19 04:10:31.970815] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.705 [2024-04-19 04:10:31.970818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:19:17.705 [2024-04-19 04:10:31.970822] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970828] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.705 [2024-04-19 04:10:31.970847] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.705 [2024-04-19 04:10:31.970851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:19:17.705 [2024-04-19 04:10:31.970855] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x183800 00:19:17.705 [2024-04-19 04:10:31.970861] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.970866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.706 [2024-04-19 04:10:31.970885] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.706 [2024-04-19 04:10:31.970889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:19:17.706 [2024-04-19 04:10:31.970893] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.970899] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.970904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.706 [2024-04-19 04:10:31.970918] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.706 [2024-04-19 04:10:31.970921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:19:17.706 [2024-04-19 04:10:31.970925] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.970931] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.970937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.706 [2024-04-19 04:10:31.970950] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.706 [2024-04-19 04:10:31.970954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:19:17.706 [2024-04-19 04:10:31.970958] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.970964] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.970969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.706 [2024-04-19 04:10:31.970985] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.706 [2024-04-19 04:10:31.970990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:19:17.706 [2024-04-19 04:10:31.970994] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971000] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.706 [2024-04-19 04:10:31.971026] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.706 [2024-04-19 04:10:31.971030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:19:17.706 [2024-04-19 04:10:31.971034] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971040] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.706 [2024-04-19 04:10:31.971064] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.706 [2024-04-19 04:10:31.971068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:19:17.706 [2024-04-19 04:10:31.971071] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971077] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.706 [2024-04-19 04:10:31.971100] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.706 [2024-04-19 04:10:31.971104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 
cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:19:17.706 [2024-04-19 04:10:31.971108] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971114] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.706 [2024-04-19 04:10:31.971139] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.706 [2024-04-19 04:10:31.971143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:19:17.706 [2024-04-19 04:10:31.971147] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971153] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.706 [2024-04-19 04:10:31.971176] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.706 [2024-04-19 04:10:31.971179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:19:17.706 [2024-04-19 04:10:31.971183] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971189] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED 
DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.706 [2024-04-19 04:10:31.971210] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.706 [2024-04-19 04:10:31.971214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:19:17.706 [2024-04-19 04:10:31.971218] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971224] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.706 [2024-04-19 04:10:31.971250] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.706 [2024-04-19 04:10:31.971254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:19:17.706 [2024-04-19 04:10:31.971257] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971264] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.706 [2024-04-19 04:10:31.971286] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.706 [2024-04-19 04:10:31.971290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:19:17.706 [2024-04-19 04:10:31.971294] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x183800 00:19:17.706 
[2024-04-19 04:10:31.971300] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.706 [2024-04-19 04:10:31.971319] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.706 [2024-04-19 04:10:31.971322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:19:17.706 [2024-04-19 04:10:31.971326] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971332] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.706 [2024-04-19 04:10:31.971358] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.706 [2024-04-19 04:10:31.971362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:19:17.706 [2024-04-19 04:10:31.971365] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971372] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.706 [2024-04-19 04:10:31.971394] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.706 [2024-04-19 
04:10:31.971398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:19:17.706 [2024-04-19 04:10:31.971406] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971412] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.706 [2024-04-19 04:10:31.971433] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.706 [2024-04-19 04:10:31.971436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:19:17.706 [2024-04-19 04:10:31.971441] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971446] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.706 [2024-04-19 04:10:31.971469] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.706 [2024-04-19 04:10:31.971473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:19:17.706 [2024-04-19 04:10:31.971477] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971483] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971488] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.706 [2024-04-19 04:10:31.971506] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.706 [2024-04-19 04:10:31.971510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:19:17.706 [2024-04-19 04:10:31.971514] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971519] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.706 [2024-04-19 04:10:31.971525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.707 [2024-04-19 04:10:31.971542] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.707 [2024-04-19 04:10:31.971546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:19:17.707 [2024-04-19 04:10:31.971550] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.971556] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.971561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.707 [2024-04-19 04:10:31.971579] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.707 [2024-04-19 04:10:31.971582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:19:17.707 [2024-04-19 04:10:31.971586] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.971592] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.971598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.707 [2024-04-19 04:10:31.971611] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.707 [2024-04-19 04:10:31.971615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:19:17.707 [2024-04-19 04:10:31.971619] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.971625] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.971630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.707 [2024-04-19 04:10:31.971653] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.707 [2024-04-19 04:10:31.971657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:19:17.707 [2024-04-19 04:10:31.971661] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.971667] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.971673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.707 [2024-04-19 04:10:31.971691] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.707 [2024-04-19 04:10:31.971695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:19:17.707 [2024-04-19 04:10:31.971699] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.971705] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.971710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.707 [2024-04-19 04:10:31.971724] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.707 [2024-04-19 04:10:31.971728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:19:17.707 [2024-04-19 04:10:31.971732] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.971738] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.971743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.707 [2024-04-19 04:10:31.971756] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.707 [2024-04-19 04:10:31.971760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:19:17.707 [2024-04-19 04:10:31.971764] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.971770] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.971775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.707 [2024-04-19 04:10:31.971791] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.707 [2024-04-19 04:10:31.971795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:19:17.707 [2024-04-19 04:10:31.971799] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.971805] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.971810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.707 [2024-04-19 04:10:31.971824] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.707 [2024-04-19 04:10:31.971828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:19:17.707 [2024-04-19 04:10:31.971832] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.971838] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.971844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.707 [2024-04-19 04:10:31.971863] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.707 [2024-04-19 04:10:31.971867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 
cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:19:17.707 [2024-04-19 04:10:31.971871] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.971877] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.971882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.707 [2024-04-19 04:10:31.971900] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.707 [2024-04-19 04:10:31.971904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:19:17.707 [2024-04-19 04:10:31.971908] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.971914] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.971919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.707 [2024-04-19 04:10:31.971937] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.707 [2024-04-19 04:10:31.971940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:19:17.707 [2024-04-19 04:10:31.971944] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.971950] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.971956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED 
DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.707 [2024-04-19 04:10:31.971972] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.707 [2024-04-19 04:10:31.971976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:19:17.707 [2024-04-19 04:10:31.971979] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.971985] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.971991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.707 [2024-04-19 04:10:31.972008] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.707 [2024-04-19 04:10:31.972012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:19:17.707 [2024-04-19 04:10:31.972016] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.972022] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.972027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.707 [2024-04-19 04:10:31.972046] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.707 [2024-04-19 04:10:31.972050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:19:17.707 [2024-04-19 04:10:31.972054] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x183800 00:19:17.707 
[2024-04-19 04:10:31.972060] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.972067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.707 [2024-04-19 04:10:31.972082] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.707 [2024-04-19 04:10:31.972085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:19:17.707 [2024-04-19 04:10:31.972089] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.972095] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.972101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.707 [2024-04-19 04:10:31.972117] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.707 [2024-04-19 04:10:31.972121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:19:17.707 [2024-04-19 04:10:31.972124] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.972131] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.972136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.707 [2024-04-19 04:10:31.972156] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.707 [2024-04-19 
04:10:31.972160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:19:17.707 [2024-04-19 04:10:31.972164] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.972170] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.707 [2024-04-19 04:10:31.972175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.707 [2024-04-19 04:10:31.972191] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.708 [2024-04-19 04:10:31.972195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:19:17.708 [2024-04-19 04:10:31.972199] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:31.972205] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:31.972210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.708 [2024-04-19 04:10:31.972230] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.708 [2024-04-19 04:10:31.972234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:19:17.708 [2024-04-19 04:10:31.972238] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:31.972244] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:31.972249] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.708 [2024-04-19 04:10:31.972267] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.708 [2024-04-19 04:10:31.972271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:19:17.708 [2024-04-19 04:10:31.972274] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:31.972282] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:31.972288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.708 [2024-04-19 04:10:31.972304] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.708 [2024-04-19 04:10:31.972308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:19:17.708 [2024-04-19 04:10:31.972312] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:31.972318] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:31.972323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.708 [2024-04-19 04:10:31.972342] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.708 [2024-04-19 04:10:31.972346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:19:17.708 [2024-04-19 04:10:31.972349] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:31.972356] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:31.972361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.708 [2024-04-19 04:10:31.972378] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.708 [2024-04-19 04:10:31.972382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:19:17.708 [2024-04-19 04:10:31.972386] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:31.972392] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:31.972397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.708 [2024-04-19 04:10:31.976408] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.708 [2024-04-19 04:10:31.976414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:19:17.708 [2024-04-19 04:10:31.976418] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:31.976424] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:31.976429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.708 [2024-04-19 04:10:31.976447] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.708 [2024-04-19 04:10:31.976451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0004 p:0 m:0 dnr:0 00:19:17.708 [2024-04-19 04:10:31.976454] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:31.976459] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:19:17.708 128 00:19:17.708 Transport Service Identifier: 4420 00:19:17.708 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:19:17.708 Transport Address: 192.168.100.8 00:19:17.708 Transport Specific Address Subtype - RDMA 00:19:17.708 RDMA QP Service Type: 1 (Reliable Connected) 00:19:17.708 RDMA Provider Type: 1 (No provider specified) 00:19:17.708 RDMA CM Service: 1 (RDMA_CM) 00:19:17.708 04:10:32 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:19:17.708 [2024-04-19 04:10:32.040270] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
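The `spdk_nvme_identify` invocation above is driven by an SPDK transport-ID string (`trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1`). A minimal sketch of how such a string decomposes into fields — `parse_trid` is an illustrative helper, not part of SPDK's API:

```python
def parse_trid(trid: str) -> dict:
    """Split an SPDK-style transport-ID string into a dict.

    Each whitespace-separated token is key:value; only the FIRST ':'
    separates key from value, so NQNs that themselves contain ':'
    (e.g. nqn.2016-06.io.spdk:cnode1) survive intact.
    """
    fields = {}
    for token in trid.split():
        key, _, value = token.partition(":")
        fields[key] = value
    return fields


trid = ("trtype:rdma adrfam:IPv4 traddr:192.168.100.8 "
        "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1")
fields = parse_trid(trid)
print(fields["traddr"], fields["subnqn"])
```

The same five fields (transport type, address family, address, service ID, subsystem NQN) are the ones echoed back in the discovery log page entries earlier in this log.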
00:19:17.708 [2024-04-19 04:10:32.040314] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid359493 ] 00:19:17.708 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.708 [2024-04-19 04:10:32.078617] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:19:17.708 [2024-04-19 04:10:32.078678] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:19:17.708 [2024-04-19 04:10:32.078689] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:19:17.708 [2024-04-19 04:10:32.078693] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:19:17.708 [2024-04-19 04:10:32.078711] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:19:17.708 [2024-04-19 04:10:32.090731] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
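The CM-event entry just above ("Requested queue depth 32. Target receive queue depth 32.") records the qpair depth negotiation. A sketch of the underlying rule, under the assumption that the effective depth is simply the smaller of the two advertised values (the function name is illustrative, not SPDK's):

```python
def negotiate_queue_depth(requested: int, target_recv: int) -> int:
    """A submitter can never keep more requests outstanding than the
    peer has receive slots posted, so the effective qpair depth is
    the minimum of what the initiator asked for and what the target
    advertised in the RDMA CM connect response."""
    return min(requested, target_recv)


# Values from the log line above: both sides offer 32.
print(negotiate_queue_depth(32, 32))
```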
00:19:17.708 [2024-04-19 04:10:32.104394] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:19:17.708 [2024-04-19 04:10:32.104408] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:19:17.708 [2024-04-19 04:10:32.104413] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104418] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104423] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104427] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104431] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104434] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104438] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104442] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104446] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104450] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104454] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104458] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104461] nvme_rdma.c: 
968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104465] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104469] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104473] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104477] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104483] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104487] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104491] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104495] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104499] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104502] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104506] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104510] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104514] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104518] 
nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104522] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104526] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104529] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104533] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104537] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:19:17.708 [2024-04-19 04:10:32.104540] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:19:17.708 [2024-04-19 04:10:32.104543] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:19:17.708 [2024-04-19 04:10:32.104554] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.708 [2024-04-19 04:10:32.104564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf200 len:0x400 key:0x183800 00:19:17.708 [2024-04-19 04:10:32.110405] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.708 [2024-04-19 04:10:32.110412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:19:17.709 [2024-04-19 04:10:32.110417] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x183800 00:19:17.709 [2024-04-19 04:10:32.110423] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:17.709 [2024-04-19 04:10:32.110428] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:19:17.709 [2024-04-19 04:10:32.110432] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:19:17.709 [2024-04-19 04:10:32.110442] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.709 [2024-04-19 04:10:32.110448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.709 [2024-04-19 04:10:32.110471] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.709 [2024-04-19 04:10:32.110475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:19:17.709 [2024-04-19 04:10:32.110482] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:19:17.709 [2024-04-19 04:10:32.110486] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x183800 00:19:17.709 [2024-04-19 04:10:32.110492] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:19:17.709 [2024-04-19 04:10:32.110498] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.709 [2024-04-19 04:10:32.110503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.709 [2024-04-19 04:10:32.110519] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.709 [2024-04-19 04:10:32.110523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:19:17.709 [2024-04-19 
04:10:32.110527] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:19:17.709 [2024-04-19 04:10:32.110531] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x183800 00:19:17.709 [2024-04-19 04:10:32.110535] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:19:17.709 [2024-04-19 04:10:32.110540] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.709 [2024-04-19 04:10:32.110546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.709 [2024-04-19 04:10:32.110561] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.709 [2024-04-19 04:10:32.110565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:17.709 [2024-04-19 04:10:32.110569] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:17.709 [2024-04-19 04:10:32.110572] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x183800 00:19:17.709 [2024-04-19 04:10:32.110578] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.709 [2024-04-19 04:10:32.110584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.709 [2024-04-19 04:10:32.110600] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.709 [2024-04-19 04:10:32.110604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 
sqhd:0005 p:0 m:0 dnr:0 00:19:17.709 [2024-04-19 04:10:32.110608] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:19:17.709 [2024-04-19 04:10:32.110612] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:19:17.709 [2024-04-19 04:10:32.110615] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x183800 00:19:17.709 [2024-04-19 04:10:32.110620] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:17.709 [2024-04-19 04:10:32.110724] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:19:17.709 [2024-04-19 04:10:32.110727] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:17.709 [2024-04-19 04:10:32.110733] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.709 [2024-04-19 04:10:32.110738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.709 [2024-04-19 04:10:32.110759] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.709 [2024-04-19 04:10:32.110763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:17.709 [2024-04-19 04:10:32.110768] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:17.709 [2024-04-19 04:10:32.110772] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x183800 
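The state transitions traced above (CC.EN = 0 && CSTS.RDY = 0 → "controller is disabled" → "Setting CC.EN = 1" → "wait for CSTS.RDY = 1") are the standard NVMe controller-enable handshake: EN is bit 0 of the Controller Configuration register and RDY is bit 0 of Controller Status, per the NVMe base specification. A toy sketch of that polling loop, with a fake in-memory controller standing in for the real fabric property get/set round-trips:

```python
CC_EN = 1 << 0      # Controller Configuration, Enable bit
CSTS_RDY = 1 << 0   # Controller Status, Ready bit


def enable_controller(read_cc, write_cc, read_csts, poll_limit=100):
    """Drive the CC.EN=1 / wait-for-CSTS.RDY=1 sequence from the log."""
    if read_cc() & CC_EN == 0 and read_csts() & CSTS_RDY == 0:
        write_cc(read_cc() | CC_EN)          # Setting CC.EN = 1
    for _ in range(poll_limit):              # wait for CSTS.RDY = 1
        if read_csts() & CSTS_RDY:
            return True
    return False


class FakeCtrlr:
    """Illustrative stand-in: RDY follows EN after a short delay."""
    def __init__(self):
        self.cc = 0
        self._delay = 2
    def read_cc(self):
        return self.cc
    def write_cc(self, value):
        self.cc = value
    def read_csts(self):
        if self.cc & CC_EN:
            if self._delay:
                self._delay -= 1
                return 0
            return CSTS_RDY
        return 0


c = FakeCtrlr()
print(enable_controller(c.read_cc, c.write_cc, c.read_csts))  # True
```

In the fabric case each register access is a PROPERTY GET/SET command over the admin queue, which is exactly why the log shows a FABRIC PROPERTY GET completion for every poll of CSTS.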
00:19:17.709 [2024-04-19 04:10:32.110778] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.709 [2024-04-19 04:10:32.110783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.709 [2024-04-19 04:10:32.110803] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.709 [2024-04-19 04:10:32.110806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:19:17.709 [2024-04-19 04:10:32.110810] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:17.709 [2024-04-19 04:10:32.110814] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:19:17.709 [2024-04-19 04:10:32.110817] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x183800 00:19:17.709 [2024-04-19 04:10:32.110822] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:19:17.709 [2024-04-19 04:10:32.110830] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:19:17.709 [2024-04-19 04:10:32.110837] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.709 [2024-04-19 04:10:32.110843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183800 00:19:17.709 [2024-04-19 04:10:32.110877] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
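The admin command the log prints as `IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000` is Identify with CNS = 01h (Identify Controller data structure): opcode 0x06 with the CNS value in the low byte of CDW10, per the NVMe base specification. A small sketch assembling those fields — the `build_identify` helper and dict layout are illustrative, not SPDK's internal representation:

```python
IDENTIFY_OPC = 0x06   # NVMe admin opcode: Identify
CNS_CTRLR = 0x01      # CNS 01h: Identify Controller data structure


def build_identify(cid: int, cns: int, nsid: int = 0) -> dict:
    """Assemble the admin-command fields the log prints for Identify."""
    return {"opc": IDENTIFY_OPC, "cid": cid, "nsid": nsid,
            "cdw10": cns, "cdw11": 0}


cmd = build_identify(cid=0, cns=CNS_CTRLR)
print(cmd)
```

The returned 4 KiB data structure is what the driver then uses to derive MDTS-based transfer limits and the CNTLID echoed in the subsequent log entries.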
00:19:17.709 [2024-04-19 04:10:32.110881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:17.709 [2024-04-19 04:10:32.110887] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:19:17.709 [2024-04-19 04:10:32.110890] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:19:17.709 [2024-04-19 04:10:32.110894] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:19:17.709 [2024-04-19 04:10:32.110899] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:19:17.709 [2024-04-19 04:10:32.110902] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:19:17.709 [2024-04-19 04:10:32.110906] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:19:17.709 [2024-04-19 04:10:32.110909] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x183800 00:19:17.709 [2024-04-19 04:10:32.110914] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:19:17.709 [2024-04-19 04:10:32.110919] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.709 [2024-04-19 04:10:32.110925] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.710 [2024-04-19 04:10:32.110946] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.710 [2024-04-19 04:10:32.110950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:17.710 [2024-04-19 04:10:32.110955] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0500 length 0x40 lkey 0x183800 00:19:17.710 [2024-04-19 04:10:32.110961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.710 [2024-04-19 04:10:32.110966] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0640 length 0x40 lkey 0x183800 00:19:17.710 [2024-04-19 04:10:32.110970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.710 [2024-04-19 04:10:32.110975] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.710 [2024-04-19 04:10:32.110979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.710 [2024-04-19 04:10:32.110984] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d08c0 length 0x40 lkey 0x183800 00:19:17.710 [2024-04-19 04:10:32.110988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.710 [2024-04-19 04:10:32.110992] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:17.710 [2024-04-19 04:10:32.110996] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x183800 00:19:17.710 [2024-04-19 04:10:32.111003] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:17.710 [2024-04-19 04:10:32.111008] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 
length 0x40 lkey 0x183800 00:19:17.710 [2024-04-19 04:10:32.111013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.710 [2024-04-19 04:10:32.111027] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.710 [2024-04-19 04:10:32.111030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:19:17.710 [2024-04-19 04:10:32.111034] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:19:17.710 [2024-04-19 04:10:32.111038] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:17.710 [2024-04-19 04:10:32.111042] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x183800 00:19:17.710 [2024-04-19 04:10:32.111046] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:19:17.710 [2024-04-19 04:10:32.111051] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:17.711 [2024-04-19 04:10:32.111056] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.711 [2024-04-19 04:10:32.111061] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.711 [2024-04-19 04:10:32.111086] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.711 [2024-04-19 04:10:32.111090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e 
sqhd:000b p:0 m:0 dnr:0 00:19:17.711 [2024-04-19 04:10:32.111126] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:19:17.711 [2024-04-19 04:10:32.111130] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x183800 00:19:17.711 [2024-04-19 04:10:32.111135] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:17.711 [2024-04-19 04:10:32.111143] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.711 [2024-04-19 04:10:32.111148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x183800 00:19:17.711 [2024-04-19 04:10:32.111173] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.711 [2024-04-19 04:10:32.111177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:17.711 [2024-04-19 04:10:32.111185] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:19:17.711 [2024-04-19 04:10:32.111191] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:19:17.711 [2024-04-19 04:10:32.111195] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x183800 00:19:17.711 [2024-04-19 04:10:32.111200] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:19:17.711 [2024-04-19 04:10:32.111206] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.711 
[2024-04-19 04:10:32.111211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183800 00:19:17.711 [2024-04-19 04:10:32.111238] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.711 [2024-04-19 04:10:32.111241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:17.711 [2024-04-19 04:10:32.111249] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:17.711 [2024-04-19 04:10:32.111253] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x183800 00:19:17.711 [2024-04-19 04:10:32.111259] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:17.711 [2024-04-19 04:10:32.111264] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.711 [2024-04-19 04:10:32.111270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183800 00:19:17.711 [2024-04-19 04:10:32.111289] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.711 [2024-04-19 04:10:32.111293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:17.711 [2024-04-19 04:10:32.111299] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:17.711 [2024-04-19 04:10:32.111303] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 
0x2000003cf888 length 0x10 lkey 0x183800 00:19:17.711 [2024-04-19 04:10:32.111307] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:19:17.711 [2024-04-19 04:10:32.111313] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:19:17.711 [2024-04-19 04:10:32.111318] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:17.711 [2024-04-19 04:10:32.111322] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:19:17.711 [2024-04-19 04:10:32.111326] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:19:17.711 [2024-04-19 04:10:32.111329] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:19:17.711 [2024-04-19 04:10:32.111334] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:19:17.711 [2024-04-19 04:10:32.111345] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.711 [2024-04-19 04:10:32.111350] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.711 [2024-04-19 04:10:32.111355] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 lkey 0x183800 00:19:17.711 [2024-04-19 04:10:32.111360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.711 [2024-04-19 04:10:32.111367] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.711 [2024-04-19 04:10:32.111371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:17.711 [2024-04-19 04:10:32.111375] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x183800 00:19:17.711 [2024-04-19 04:10:32.111381] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.711 [2024-04-19 04:10:32.111386] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:0 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.711 [2024-04-19 04:10:32.111391] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.711 [2024-04-19 04:10:32.111395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:17.711 [2024-04-19 04:10:32.111399] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x183800 00:19:17.711 [2024-04-19 04:10:32.111413] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.711 [2024-04-19 04:10:32.111417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:17.711 [2024-04-19 04:10:32.111421] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x183800 00:19:17.711 [2024-04-19 04:10:32.111426] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.711 [2024-04-19 04:10:32.111431] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:0 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.711 [2024-04-19 04:10:32.111452] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.711 [2024-04-19 04:10:32.111456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:17.711 [2024-04-19 04:10:32.111459] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x183800 00:19:17.711 [2024-04-19 04:10:32.111465] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.711 [2024-04-19 04:10:32.111470] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.711 [2024-04-19 04:10:32.111489] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.711 [2024-04-19 04:10:32.111493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:19:17.711 [2024-04-19 04:10:32.111497] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x183800 00:19:17.711 [2024-04-19 04:10:32.111504] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x183800 00:19:17.711 [2024-04-19 04:10:32.111510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x183800 00:19:17.711 [2024-04-19 04:10:32.111517] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 lkey 0x183800 00:19:17.711 [2024-04-19 04:10:32.111522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x183800 00:19:17.711 [2024-04-19 04:10:32.111528] 
nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b40 length 0x40 lkey 0x183800 00:19:17.711 [2024-04-19 04:10:32.111533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x183800 00:19:17.711 [2024-04-19 04:10:32.111539] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c80 length 0x40 lkey 0x183800 00:19:17.711 [2024-04-19 04:10:32.111545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x183800 00:19:17.711 [2024-04-19 04:10:32.111551] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.711 [2024-04-19 04:10:32.111554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:17.711 [2024-04-19 04:10:32.111564] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x183800 00:19:17.711 [2024-04-19 04:10:32.111577] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.711 [2024-04-19 04:10:32.111581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:17.711 [2024-04-19 04:10:32.111586] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x183800 00:19:17.711 [2024-04-19 04:10:32.111591] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.711 [2024-04-19 04:10:32.111594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:17.711 [2024-04-19 04:10:32.111598] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c8 
length 0x10 lkey 0x183800 00:19:17.711 [2024-04-19 04:10:32.111602] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.711 [2024-04-19 04:10:32.111606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:17.711 [2024-04-19 04:10:32.111613] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x183800 00:19:17.711 ===================================================== 00:19:17.711 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:19:17.711 ===================================================== 00:19:17.711 Controller Capabilities/Features 00:19:17.711 ================================ 00:19:17.711 Vendor ID: 8086 00:19:17.711 Subsystem Vendor ID: 8086 00:19:17.711 Serial Number: SPDK00000000000001 00:19:17.711 Model Number: SPDK bdev Controller 00:19:17.711 Firmware Version: 24.05 00:19:17.711 Recommended Arb Burst: 6 00:19:17.711 IEEE OUI Identifier: e4 d2 5c 00:19:17.711 Multi-path I/O 00:19:17.712 May have multiple subsystem ports: Yes 00:19:17.712 May have multiple controllers: Yes 00:19:17.712 Associated with SR-IOV VF: No 00:19:17.712 Max Data Transfer Size: 131072 00:19:17.712 Max Number of Namespaces: 32 00:19:17.712 Max Number of I/O Queues: 127 00:19:17.712 NVMe Specification Version (VS): 1.3 00:19:17.712 NVMe Specification Version (Identify): 1.3 00:19:17.712 Maximum Queue Entries: 128 00:19:17.712 Contiguous Queues Required: Yes 00:19:17.712 Arbitration Mechanisms Supported 00:19:17.712 Weighted Round Robin: Not Supported 00:19:17.712 Vendor Specific: Not Supported 00:19:17.712 Reset Timeout: 15000 ms 00:19:17.712 Doorbell Stride: 4 bytes 00:19:17.712 NVM Subsystem Reset: Not Supported 00:19:17.712 Command Sets Supported 00:19:17.712 NVM Command Set: Supported 00:19:17.712 Boot Partition: Not Supported 00:19:17.712 Memory Page Size Minimum: 4096 bytes 00:19:17.712 Memory Page 
Size Maximum: 4096 bytes 00:19:17.712 Persistent Memory Region: Not Supported 00:19:17.712 Optional Asynchronous Events Supported 00:19:17.712 Namespace Attribute Notices: Supported 00:19:17.712 Firmware Activation Notices: Not Supported 00:19:17.712 ANA Change Notices: Not Supported 00:19:17.712 PLE Aggregate Log Change Notices: Not Supported 00:19:17.712 LBA Status Info Alert Notices: Not Supported 00:19:17.712 EGE Aggregate Log Change Notices: Not Supported 00:19:17.712 Normal NVM Subsystem Shutdown event: Not Supported 00:19:17.712 Zone Descriptor Change Notices: Not Supported 00:19:17.712 Discovery Log Change Notices: Not Supported 00:19:17.712 Controller Attributes 00:19:17.712 128-bit Host Identifier: Supported 00:19:17.712 Non-Operational Permissive Mode: Not Supported 00:19:17.712 NVM Sets: Not Supported 00:19:17.712 Read Recovery Levels: Not Supported 00:19:17.712 Endurance Groups: Not Supported 00:19:17.712 Predictable Latency Mode: Not Supported 00:19:17.712 Traffic Based Keep ALive: Not Supported 00:19:17.712 Namespace Granularity: Not Supported 00:19:17.712 SQ Associations: Not Supported 00:19:17.712 UUID List: Not Supported 00:19:17.712 Multi-Domain Subsystem: Not Supported 00:19:17.712 Fixed Capacity Management: Not Supported 00:19:17.712 Variable Capacity Management: Not Supported 00:19:17.712 Delete Endurance Group: Not Supported 00:19:17.712 Delete NVM Set: Not Supported 00:19:17.712 Extended LBA Formats Supported: Not Supported 00:19:17.712 Flexible Data Placement Supported: Not Supported 00:19:17.712 00:19:17.712 Controller Memory Buffer Support 00:19:17.712 ================================ 00:19:17.712 Supported: No 00:19:17.712 00:19:17.712 Persistent Memory Region Support 00:19:17.712 ================================ 00:19:17.712 Supported: No 00:19:17.712 00:19:17.712 Admin Command Set Attributes 00:19:17.712 ============================ 00:19:17.712 Security Send/Receive: Not Supported 00:19:17.712 Format NVM: Not Supported 00:19:17.712 
Firmware Activate/Download: Not Supported 00:19:17.712 Namespace Management: Not Supported 00:19:17.712 Device Self-Test: Not Supported 00:19:17.712 Directives: Not Supported 00:19:17.712 NVMe-MI: Not Supported 00:19:17.712 Virtualization Management: Not Supported 00:19:17.712 Doorbell Buffer Config: Not Supported 00:19:17.712 Get LBA Status Capability: Not Supported 00:19:17.712 Command & Feature Lockdown Capability: Not Supported 00:19:17.712 Abort Command Limit: 4 00:19:17.712 Async Event Request Limit: 4 00:19:17.712 Number of Firmware Slots: N/A 00:19:17.712 Firmware Slot 1 Read-Only: N/A 00:19:17.712 Firmware Activation Without Reset: N/A 00:19:17.712 Multiple Update Detection Support: N/A 00:19:17.712 Firmware Update Granularity: No Information Provided 00:19:17.712 Per-Namespace SMART Log: No 00:19:17.712 Asymmetric Namespace Access Log Page: Not Supported 00:19:17.712 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:19:17.712 Command Effects Log Page: Supported 00:19:17.712 Get Log Page Extended Data: Supported 00:19:17.712 Telemetry Log Pages: Not Supported 00:19:17.712 Persistent Event Log Pages: Not Supported 00:19:17.712 Supported Log Pages Log Page: May Support 00:19:17.712 Commands Supported & Effects Log Page: Not Supported 00:19:17.712 Feature Identifiers & Effects Log Page:May Support 00:19:17.712 NVMe-MI Commands & Effects Log Page: May Support 00:19:17.712 Data Area 4 for Telemetry Log: Not Supported 00:19:17.712 Error Log Page Entries Supported: 128 00:19:17.712 Keep Alive: Supported 00:19:17.712 Keep Alive Granularity: 10000 ms 00:19:17.712 00:19:17.712 NVM Command Set Attributes 00:19:17.712 ========================== 00:19:17.712 Submission Queue Entry Size 00:19:17.712 Max: 64 00:19:17.712 Min: 64 00:19:17.712 Completion Queue Entry Size 00:19:17.712 Max: 16 00:19:17.712 Min: 16 00:19:17.712 Number of Namespaces: 32 00:19:17.712 Compare Command: Supported 00:19:17.712 Write Uncorrectable Command: Not Supported 00:19:17.712 Dataset Management 
Command: Supported 00:19:17.712 Write Zeroes Command: Supported 00:19:17.712 Set Features Save Field: Not Supported 00:19:17.712 Reservations: Supported 00:19:17.712 Timestamp: Not Supported 00:19:17.712 Copy: Supported 00:19:17.712 Volatile Write Cache: Present 00:19:17.712 Atomic Write Unit (Normal): 1 00:19:17.712 Atomic Write Unit (PFail): 1 00:19:17.712 Atomic Compare & Write Unit: 1 00:19:17.712 Fused Compare & Write: Supported 00:19:17.712 Scatter-Gather List 00:19:17.712 SGL Command Set: Supported 00:19:17.712 SGL Keyed: Supported 00:19:17.712 SGL Bit Bucket Descriptor: Not Supported 00:19:17.712 SGL Metadata Pointer: Not Supported 00:19:17.712 Oversized SGL: Not Supported 00:19:17.712 SGL Metadata Address: Not Supported 00:19:17.712 SGL Offset: Supported 00:19:17.712 Transport SGL Data Block: Not Supported 00:19:17.712 Replay Protected Memory Block: Not Supported 00:19:17.712 00:19:17.712 Firmware Slot Information 00:19:17.712 ========================= 00:19:17.712 Active slot: 1 00:19:17.712 Slot 1 Firmware Revision: 24.05 00:19:17.712 00:19:17.712 00:19:17.712 Commands Supported and Effects 00:19:17.712 ============================== 00:19:17.712 Admin Commands 00:19:17.712 -------------- 00:19:17.712 Get Log Page (02h): Supported 00:19:17.712 Identify (06h): Supported 00:19:17.712 Abort (08h): Supported 00:19:17.712 Set Features (09h): Supported 00:19:17.712 Get Features (0Ah): Supported 00:19:17.712 Asynchronous Event Request (0Ch): Supported 00:19:17.712 Keep Alive (18h): Supported 00:19:17.712 I/O Commands 00:19:17.712 ------------ 00:19:17.712 Flush (00h): Supported LBA-Change 00:19:17.712 Write (01h): Supported LBA-Change 00:19:17.712 Read (02h): Supported 00:19:17.712 Compare (05h): Supported 00:19:17.712 Write Zeroes (08h): Supported LBA-Change 00:19:17.712 Dataset Management (09h): Supported LBA-Change 00:19:17.712 Copy (19h): Supported LBA-Change 00:19:17.712 Unknown (79h): Supported LBA-Change 00:19:17.712 Unknown (7Ah): Supported 00:19:17.712 
00:19:17.712 Error Log 00:19:17.712 ========= 00:19:17.712 00:19:17.712 Arbitration 00:19:17.712 =========== 00:19:17.712 Arbitration Burst: 1 00:19:17.712 00:19:17.712 Power Management 00:19:17.712 ================ 00:19:17.712 Number of Power States: 1 00:19:17.712 Current Power State: Power State #0 00:19:17.712 Power State #0: 00:19:17.712 Max Power: 0.00 W 00:19:17.712 Non-Operational State: Operational 00:19:17.712 Entry Latency: Not Reported 00:19:17.712 Exit Latency: Not Reported 00:19:17.712 Relative Read Throughput: 0 00:19:17.712 Relative Read Latency: 0 00:19:17.712 Relative Write Throughput: 0 00:19:17.712 Relative Write Latency: 0 00:19:17.712 Idle Power: Not Reported 00:19:17.712 Active Power: Not Reported 00:19:17.712 Non-Operational Permissive Mode: Not Supported 00:19:17.712 00:19:17.712 Health Information 00:19:17.712 ================== 00:19:17.712 Critical Warnings: 00:19:17.712 Available Spare Space: OK 00:19:17.712 Temperature: OK 00:19:17.712 Device Reliability: OK 00:19:17.712 Read Only: No 00:19:17.712 Volatile Memory Backup: OK 00:19:17.712 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:17.712 Temperature Threshold: [2024-04-19 04:10:32.111688] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c80 length 0x40 lkey 0x183800 00:19:17.712 [2024-04-19 04:10:32.111694] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.712 [2024-04-19 04:10:32.111709] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.712 [2024-04-19 04:10:32.111713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:17.712 [2024-04-19 04:10:32.111717] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x183800 00:19:17.712 [2024-04-19 04:10:32.111735] 
nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:19:17.712 [2024-04-19 04:10:32.111741] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 45271 doesn't match qid 00:19:17.712 [2024-04-19 04:10:32.111752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:5 sqhd:e790 p:0 m:0 dnr:0 00:19:17.712 [2024-04-19 04:10:32.111756] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 45271 doesn't match qid 00:19:17.713 [2024-04-19 04:10:32.111762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:5 sqhd:e790 p:0 m:0 dnr:0 00:19:17.713 [2024-04-19 04:10:32.111767] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 45271 doesn't match qid 00:19:17.713 [2024-04-19 04:10:32.111772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:5 sqhd:e790 p:0 m:0 dnr:0 00:19:17.713 [2024-04-19 04:10:32.111777] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 45271 doesn't match qid 00:19:17.713 [2024-04-19 04:10:32.111782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:5 sqhd:e790 p:0 m:0 dnr:0 00:19:17.713 [2024-04-19 04:10:32.111788] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d08c0 length 0x40 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.111794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.713 [2024-04-19 04:10:32.111810] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.713 [2024-04-19 04:10:32.111814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:19:17.713 [2024-04-19 04:10:32.111820] 
nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.111825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.713 [2024-04-19 04:10:32.111829] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.111843] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.713 [2024-04-19 04:10:32.111847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:17.713 [2024-04-19 04:10:32.111851] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:19:17.713 [2024-04-19 04:10:32.111854] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:19:17.713 [2024-04-19 04:10:32.111858] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.111864] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.111869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.713 [2024-04-19 04:10:32.111887] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.713 [2024-04-19 04:10:32.111891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:19:17.713 [2024-04-19 04:10:32.111895] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.111902] 
nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.111907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.713 [2024-04-19 04:10:32.111925] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.713 [2024-04-19 04:10:32.111929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:19:17.713 [2024-04-19 04:10:32.111933] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.111940] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.111945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.713 [2024-04-19 04:10:32.111964] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.713 [2024-04-19 04:10:32.111968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:19:17.713 [2024-04-19 04:10:32.111973] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.111980] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.111985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.713 [2024-04-19 04:10:32.112001] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.713 [2024-04-19 04:10:32.112005] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:19:17.713 [2024-04-19 04:10:32.112009] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.112016] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.112021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.713 [2024-04-19 04:10:32.112039] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.713 [2024-04-19 04:10:32.112043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:19:17.713 [2024-04-19 04:10:32.112048] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.112054] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.112060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.713 [2024-04-19 04:10:32.112075] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.713 [2024-04-19 04:10:32.112079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:19:17.713 [2024-04-19 04:10:32.112083] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.112089] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.112095] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.713 [2024-04-19 04:10:32.112108] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.713 [2024-04-19 04:10:32.112112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:19:17.713 [2024-04-19 04:10:32.112116] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.112122] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.112128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.713 [2024-04-19 04:10:32.112144] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.713 [2024-04-19 04:10:32.112148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:19:17.713 [2024-04-19 04:10:32.112152] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.112158] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.112164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.713 [2024-04-19 04:10:32.112179] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.713 [2024-04-19 04:10:32.112185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:19:17.713 [2024-04-19 04:10:32.112189] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.112195] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.112200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.713 [2024-04-19 04:10:32.112217] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.713 [2024-04-19 04:10:32.112220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:19:17.713 [2024-04-19 04:10:32.112224] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.112230] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.112236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.713 [2024-04-19 04:10:32.112259] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.713 [2024-04-19 04:10:32.112262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:19:17.713 [2024-04-19 04:10:32.112267] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.112273] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.112278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.713 [2024-04-19 04:10:32.112294] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.713 [2024-04-19 04:10:32.112298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:19:17.713 [2024-04-19 04:10:32.112302] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.112308] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.112313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.713 [2024-04-19 04:10:32.112335] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.713 [2024-04-19 04:10:32.112339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:19:17.713 [2024-04-19 04:10:32.112343] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.112349] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.112354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.713 [2024-04-19 04:10:32.112368] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.713 [2024-04-19 04:10:32.112372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:19:17.713 [2024-04-19 04:10:32.112376] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.112382] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.713 [2024-04-19 04:10:32.112387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.713 [2024-04-19 04:10:32.112410] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.713 [2024-04-19 04:10:32.112416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:19:17.714 [2024-04-19 04:10:32.112420] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112426] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.714 [2024-04-19 04:10:32.112450] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.714 [2024-04-19 04:10:32.112454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:19:17.714 [2024-04-19 04:10:32.112458] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112464] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.714 [2024-04-19 04:10:32.112491] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.714 [2024-04-19 04:10:32.112495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 
cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:19:17.714 [2024-04-19 04:10:32.112499] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112505] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.714 [2024-04-19 04:10:32.112527] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.714 [2024-04-19 04:10:32.112530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:19:17.714 [2024-04-19 04:10:32.112534] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112541] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.714 [2024-04-19 04:10:32.112563] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.714 [2024-04-19 04:10:32.112567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:19:17.714 [2024-04-19 04:10:32.112571] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112577] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED 
DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.714 [2024-04-19 04:10:32.112603] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.714 [2024-04-19 04:10:32.112606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:19:17.714 [2024-04-19 04:10:32.112610] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112617] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.714 [2024-04-19 04:10:32.112639] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.714 [2024-04-19 04:10:32.112643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:19:17.714 [2024-04-19 04:10:32.112647] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112653] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.714 [2024-04-19 04:10:32.112678] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.714 [2024-04-19 04:10:32.112681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:19:17.714 [2024-04-19 04:10:32.112685] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x183800 00:19:17.714 
[2024-04-19 04:10:32.112692] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.714 [2024-04-19 04:10:32.112714] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.714 [2024-04-19 04:10:32.112718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:19:17.714 [2024-04-19 04:10:32.112722] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112728] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.714 [2024-04-19 04:10:32.112755] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.714 [2024-04-19 04:10:32.112759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:19:17.714 [2024-04-19 04:10:32.112763] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112769] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.714 [2024-04-19 04:10:32.112791] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.714 [2024-04-19 
04:10:32.112795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:19:17.714 [2024-04-19 04:10:32.112799] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112805] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.714 [2024-04-19 04:10:32.112824] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.714 [2024-04-19 04:10:32.112827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:19:17.714 [2024-04-19 04:10:32.112831] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112838] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.714 [2024-04-19 04:10:32.112861] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.714 [2024-04-19 04:10:32.112865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:19:17.714 [2024-04-19 04:10:32.112869] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112875] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112880] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.714 [2024-04-19 04:10:32.112895] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.714 [2024-04-19 04:10:32.112899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:19:17.714 [2024-04-19 04:10:32.112903] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112909] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.714 [2024-04-19 04:10:32.112932] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.714 [2024-04-19 04:10:32.112936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:19:17.714 [2024-04-19 04:10:32.112940] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112946] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.714 [2024-04-19 04:10:32.112967] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.714 [2024-04-19 04:10:32.112971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:19:17.714 [2024-04-19 04:10:32.112975] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112981] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.112987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.714 [2024-04-19 04:10:32.113006] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.714 [2024-04-19 04:10:32.113010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:19:17.714 [2024-04-19 04:10:32.113013] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.113020] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.113025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.714 [2024-04-19 04:10:32.113038] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.714 [2024-04-19 04:10:32.113042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:19:17.714 [2024-04-19 04:10:32.113046] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.113052] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.714 [2024-04-19 04:10:32.113059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.714 [2024-04-19 04:10:32.113076] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.714 [2024-04-19 04:10:32.113080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:19:17.714 [2024-04-19 04:10:32.113084] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113090] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.715 [2024-04-19 04:10:32.113110] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.715 [2024-04-19 04:10:32.113114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:19:17.715 [2024-04-19 04:10:32.113118] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113124] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.715 [2024-04-19 04:10:32.113151] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.715 [2024-04-19 04:10:32.113155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:19:17.715 [2024-04-19 04:10:32.113159] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113165] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.715 [2024-04-19 04:10:32.113189] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.715 [2024-04-19 04:10:32.113193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:19:17.715 [2024-04-19 04:10:32.113197] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113203] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.715 [2024-04-19 04:10:32.113226] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.715 [2024-04-19 04:10:32.113230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:19:17.715 [2024-04-19 04:10:32.113234] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113240] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.715 [2024-04-19 04:10:32.113259] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.715 [2024-04-19 04:10:32.113262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 
cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:19:17.715 [2024-04-19 04:10:32.113266] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113273] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.715 [2024-04-19 04:10:32.113298] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.715 [2024-04-19 04:10:32.113302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:19:17.715 [2024-04-19 04:10:32.113306] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113312] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.715 [2024-04-19 04:10:32.113332] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.715 [2024-04-19 04:10:32.113336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:19:17.715 [2024-04-19 04:10:32.113340] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113346] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED 
DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.715 [2024-04-19 04:10:32.113369] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.715 [2024-04-19 04:10:32.113373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:19:17.715 [2024-04-19 04:10:32.113377] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113383] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.715 [2024-04-19 04:10:32.113405] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.715 [2024-04-19 04:10:32.113409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:19:17.715 [2024-04-19 04:10:32.113413] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113419] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.715 [2024-04-19 04:10:32.113439] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.715 [2024-04-19 04:10:32.113443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:19:17.715 [2024-04-19 04:10:32.113447] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x183800 00:19:17.715 
[2024-04-19 04:10:32.113453] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.715 [2024-04-19 04:10:32.113478] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.715 [2024-04-19 04:10:32.113482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:19:17.715 [2024-04-19 04:10:32.113486] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113493] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.715 [2024-04-19 04:10:32.113518] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.715 [2024-04-19 04:10:32.113521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:19:17.715 [2024-04-19 04:10:32.113525] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113532] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.715 [2024-04-19 04:10:32.113553] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.715 [2024-04-19 
04:10:32.113557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:19:17.715 [2024-04-19 04:10:32.113561] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113567] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.715 [2024-04-19 04:10:32.113591] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.715 [2024-04-19 04:10:32.113595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:19:17.715 [2024-04-19 04:10:32.113599] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113605] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.715 [2024-04-19 04:10:32.113625] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.715 [2024-04-19 04:10:32.113629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:19:17.715 [2024-04-19 04:10:32.113633] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113639] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.715 [2024-04-19 04:10:32.113644] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.715 [2024-04-19 04:10:32.113664] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.716 [2024-04-19 04:10:32.113668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:19:17.716 [2024-04-19 04:10:32.113672] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.113678] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.113683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.716 [2024-04-19 04:10:32.113700] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.716 [2024-04-19 04:10:32.113703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:19:17.716 [2024-04-19 04:10:32.113707] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.113715] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.113720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.716 [2024-04-19 04:10:32.113742] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.716 [2024-04-19 04:10:32.113746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:19:17.716 [2024-04-19 04:10:32.113750] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.113756] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.113761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.716 [2024-04-19 04:10:32.113774] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.716 [2024-04-19 04:10:32.113778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:19:17.716 [2024-04-19 04:10:32.113782] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.113788] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.113794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.716 [2024-04-19 04:10:32.113808] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.716 [2024-04-19 04:10:32.113812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:19:17.716 [2024-04-19 04:10:32.113816] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.113822] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.113827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.716 [2024-04-19 04:10:32.113844] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.716 [2024-04-19 04:10:32.113847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:19:17.716 [2024-04-19 04:10:32.113851] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.113857] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.113863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.716 [2024-04-19 04:10:32.113879] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.716 [2024-04-19 04:10:32.113883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:19:17.716 [2024-04-19 04:10:32.113887] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.113893] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.113898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.716 [2024-04-19 04:10:32.113917] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.716 [2024-04-19 04:10:32.113921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:19:17.716 [2024-04-19 04:10:32.113926] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.113932] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.113937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.716 [2024-04-19 04:10:32.113958] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.716 [2024-04-19 04:10:32.113962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:19:17.716 [2024-04-19 04:10:32.113966] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.113972] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.113977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.716 [2024-04-19 04:10:32.113992] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.716 [2024-04-19 04:10:32.113996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:19:17.716 [2024-04-19 04:10:32.114000] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.114006] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.114011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.716 [2024-04-19 04:10:32.114026] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.716 [2024-04-19 04:10:32.114030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 
cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:19:17.716 [2024-04-19 04:10:32.114034] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.114040] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.114045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.716 [2024-04-19 04:10:32.114060] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.716 [2024-04-19 04:10:32.114064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:19:17.716 [2024-04-19 04:10:32.114068] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.114074] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.114079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.716 [2024-04-19 04:10:32.114099] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.716 [2024-04-19 04:10:32.114103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:19:17.716 [2024-04-19 04:10:32.114107] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.114113] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.114119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED 
DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.716 [2024-04-19 04:10:32.114134] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.716 [2024-04-19 04:10:32.114137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:19:17.716 [2024-04-19 04:10:32.114142] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.114149] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.114154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.716 [2024-04-19 04:10:32.114169] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.716 [2024-04-19 04:10:32.114172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:19:17.716 [2024-04-19 04:10:32.114176] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.114182] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.114188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.716 [2024-04-19 04:10:32.114207] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.716 [2024-04-19 04:10:32.114211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:19:17.716 [2024-04-19 04:10:32.114215] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x183800 00:19:17.716 
[2024-04-19 04:10:32.114221] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.114226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.716 [2024-04-19 04:10:32.114241] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.716 [2024-04-19 04:10:32.114245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:19:17.716 [2024-04-19 04:10:32.114249] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.114255] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.114260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.716 [2024-04-19 04:10:32.114275] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.716 [2024-04-19 04:10:32.114279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:19:17.716 [2024-04-19 04:10:32.114283] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.114289] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.716 [2024-04-19 04:10:32.114294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.716 [2024-04-19 04:10:32.114315] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.716 [2024-04-19 
04:10:32.114318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:19:17.716 [2024-04-19 04:10:32.114322] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x183800 00:19:17.717 [2024-04-19 04:10:32.114328] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.717 [2024-04-19 04:10:32.114334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.717 [2024-04-19 04:10:32.114347] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.717 [2024-04-19 04:10:32.114352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:19:17.717 [2024-04-19 04:10:32.114356] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x183800 00:19:17.717 [2024-04-19 04:10:32.114362] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.717 [2024-04-19 04:10:32.114368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.717 [2024-04-19 04:10:32.114388] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.717 [2024-04-19 04:10:32.114392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:19:17.717 [2024-04-19 04:10:32.114396] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x183800 00:19:17.717 [2024-04-19 04:10:32.118407] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x183800 00:19:17.717 [2024-04-19 04:10:32.118414] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:17.717 [2024-04-19 04:10:32.118432] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:19:17.717 [2024-04-19 04:10:32.118435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0000 p:0 m:0 dnr:0 00:19:17.717 [2024-04-19 04:10:32.118439] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x183800 00:19:17.717 [2024-04-19 04:10:32.118444] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:19:17.717 0 Kelvin (-273 Celsius) 00:19:17.717 Available Spare: 0% 00:19:17.717 Available Spare Threshold: 0% 00:19:17.717 Life Percentage Used: 0% 00:19:17.717 Data Units Read: 0 00:19:17.717 Data Units Written: 0 00:19:17.717 Host Read Commands: 0 00:19:17.717 Host Write Commands: 0 00:19:17.717 Controller Busy Time: 0 minutes 00:19:17.717 Power Cycles: 0 00:19:17.717 Power On Hours: 0 hours 00:19:17.717 Unsafe Shutdowns: 0 00:19:17.717 Unrecoverable Media Errors: 0 00:19:17.717 Lifetime Error Log Entries: 0 00:19:17.717 Warning Temperature Time: 0 minutes 00:19:17.717 Critical Temperature Time: 0 minutes 00:19:17.717 00:19:17.717 Number of Queues 00:19:17.717 ================ 00:19:17.717 Number of I/O Submission Queues: 127 00:19:17.717 Number of I/O Completion Queues: 127 00:19:17.717 00:19:17.717 Active Namespaces 00:19:17.717 ================= 00:19:17.717 Namespace ID:1 00:19:17.717 Error Recovery Timeout: Unlimited 00:19:17.717 Command Set Identifier: NVM (00h) 00:19:17.717 Deallocate: Supported 00:19:17.717 Deallocated/Unwritten Error: Not Supported 00:19:17.717 Deallocated Read Value: Unknown 00:19:17.717 Deallocate in Write Zeroes: Not Supported 00:19:17.717 Deallocated Guard Field: 0xFFFF 00:19:17.717 Flush: Supported 00:19:17.717 
Reservation: Supported 00:19:17.717 Namespace Sharing Capabilities: Multiple Controllers 00:19:17.717 Size (in LBAs): 131072 (0GiB) 00:19:17.717 Capacity (in LBAs): 131072 (0GiB) 00:19:17.717 Utilization (in LBAs): 131072 (0GiB) 00:19:17.717 NGUID: ABCDEF0123456789ABCDEF0123456789 00:19:17.717 EUI64: ABCDEF0123456789 00:19:17.717 UUID: ae20b4e9-e2c1-4faf-b5b8-c210d11ec713 00:19:17.717 Thin Provisioning: Not Supported 00:19:17.717 Per-NS Atomic Units: Yes 00:19:17.717 Atomic Boundary Size (Normal): 0 00:19:17.717 Atomic Boundary Size (PFail): 0 00:19:17.717 Atomic Boundary Offset: 0 00:19:17.717 Maximum Single Source Range Length: 65535 00:19:17.717 Maximum Copy Length: 65535 00:19:17.717 Maximum Source Range Count: 1 00:19:17.717 NGUID/EUI64 Never Reused: No 00:19:17.717 Namespace Write Protected: No 00:19:17.717 Number of LBA Formats: 1 00:19:17.717 Current LBA Format: LBA Format #00 00:19:17.717 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:17.717 00:19:17.717 04:10:32 -- host/identify.sh@51 -- # sync 00:19:17.717 04:10:32 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:17.717 04:10:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.717 04:10:32 -- common/autotest_common.sh@10 -- # set +x 00:19:17.717 04:10:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.717 04:10:32 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:19:17.717 04:10:32 -- host/identify.sh@56 -- # nvmftestfini 00:19:17.717 04:10:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:17.717 04:10:32 -- nvmf/common.sh@117 -- # sync 00:19:17.717 04:10:32 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:17.717 04:10:32 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:17.717 04:10:32 -- nvmf/common.sh@120 -- # set +e 00:19:17.717 04:10:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:17.717 04:10:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:17.717 rmmod nvme_rdma 00:19:17.717 rmmod nvme_fabrics 
00:19:17.717 04:10:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:17.717 04:10:32 -- nvmf/common.sh@124 -- # set -e 00:19:17.717 04:10:32 -- nvmf/common.sh@125 -- # return 0 00:19:17.717 04:10:32 -- nvmf/common.sh@478 -- # '[' -n 359203 ']' 00:19:17.717 04:10:32 -- nvmf/common.sh@479 -- # killprocess 359203 00:19:17.717 04:10:32 -- common/autotest_common.sh@936 -- # '[' -z 359203 ']' 00:19:17.717 04:10:32 -- common/autotest_common.sh@940 -- # kill -0 359203 00:19:17.717 04:10:32 -- common/autotest_common.sh@941 -- # uname 00:19:17.717 04:10:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:17.717 04:10:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 359203 00:19:17.976 04:10:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:17.976 04:10:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:17.976 04:10:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 359203' 00:19:17.976 killing process with pid 359203 00:19:17.976 04:10:32 -- common/autotest_common.sh@955 -- # kill 359203 00:19:17.976 [2024-04-19 04:10:32.257023] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:19:17.976 04:10:32 -- common/autotest_common.sh@960 -- # wait 359203 00:19:18.234 04:10:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:18.234 04:10:32 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:19:18.234 00:19:18.234 real 0m7.195s 00:19:18.234 user 0m7.699s 00:19:18.234 sys 0m4.432s 00:19:18.234 04:10:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:18.234 04:10:32 -- common/autotest_common.sh@10 -- # set +x 00:19:18.234 ************************************ 00:19:18.234 END TEST nvmf_identify 00:19:18.234 ************************************ 00:19:18.234 04:10:32 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:19:18.234 04:10:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:18.234 04:10:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:18.234 04:10:32 -- common/autotest_common.sh@10 -- # set +x 00:19:18.234 ************************************ 00:19:18.234 START TEST nvmf_perf 00:19:18.235 ************************************ 00:19:18.235 04:10:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:19:18.493 * Looking for test storage... 00:19:18.493 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:18.493 04:10:32 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:18.493 04:10:32 -- nvmf/common.sh@7 -- # uname -s 00:19:18.493 04:10:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:18.493 04:10:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:18.493 04:10:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:18.493 04:10:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:18.493 04:10:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:18.493 04:10:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:18.493 04:10:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:18.493 04:10:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:18.493 04:10:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:18.493 04:10:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:18.493 04:10:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:19:18.493 04:10:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:19:18.493 04:10:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:18.493 04:10:32 -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:19:18.493 04:10:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:18.493 04:10:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:18.493 04:10:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:18.493 04:10:32 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:18.493 04:10:32 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:18.493 04:10:32 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:18.494 04:10:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.494 04:10:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.494 04:10:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.494 04:10:32 -- paths/export.sh@5 -- # export PATH 00:19:18.494 04:10:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.494 04:10:32 -- nvmf/common.sh@47 -- # : 0 00:19:18.494 04:10:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:18.494 04:10:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:18.494 04:10:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:18.494 04:10:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:18.494 04:10:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:18.494 04:10:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:18.494 04:10:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:18.494 04:10:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:18.494 04:10:32 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:18.494 04:10:32 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:18.494 04:10:32 -- host/perf.sh@15 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:18.494 04:10:32 -- host/perf.sh@17 -- # nvmftestinit 00:19:18.494 04:10:32 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:19:18.494 04:10:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:18.494 04:10:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:18.494 04:10:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:18.494 04:10:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:18.494 04:10:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.494 04:10:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:18.494 04:10:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.494 04:10:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:18.494 04:10:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:18.494 04:10:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:18.494 04:10:32 -- common/autotest_common.sh@10 -- # set +x 00:19:23.762 04:10:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:23.762 04:10:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:23.762 04:10:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:23.762 04:10:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:23.762 04:10:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:23.762 04:10:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:23.762 04:10:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:23.762 04:10:37 -- nvmf/common.sh@295 -- # net_devs=() 00:19:23.762 04:10:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:23.762 04:10:37 -- nvmf/common.sh@296 -- # e810=() 00:19:23.762 04:10:37 -- nvmf/common.sh@296 -- # local -ga e810 00:19:23.762 04:10:37 -- nvmf/common.sh@297 -- # x722=() 00:19:23.762 04:10:37 -- nvmf/common.sh@297 -- # local -ga x722 00:19:23.762 04:10:37 -- nvmf/common.sh@298 -- # mlx=() 00:19:23.762 04:10:37 -- nvmf/common.sh@298 -- # local -ga mlx 
00:19:23.762 04:10:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:23.762 04:10:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:23.762 04:10:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:23.762 04:10:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:23.762 04:10:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:23.762 04:10:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:23.762 04:10:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:23.762 04:10:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:23.762 04:10:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:23.762 04:10:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:23.762 04:10:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:23.762 04:10:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:23.762 04:10:37 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:23.762 04:10:37 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:23.762 04:10:37 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:23.762 04:10:37 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:23.762 04:10:37 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:23.762 04:10:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:23.762 04:10:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:23.762 04:10:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:19:23.762 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:19:23.762 04:10:37 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:23.762 04:10:37 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:23.762 04:10:37 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:23.762 04:10:37 -- 
nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:23.762 04:10:37 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:23.762 04:10:37 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:23.762 04:10:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:23.762 04:10:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:19:23.762 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:19:23.762 04:10:37 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:23.762 04:10:37 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:23.762 04:10:37 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:23.762 04:10:37 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:23.762 04:10:37 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:23.762 04:10:37 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:23.762 04:10:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:23.762 04:10:37 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:23.762 04:10:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:23.762 04:10:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.762 04:10:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:23.762 04:10:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.762 04:10:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:23.762 Found net devices under 0000:18:00.0: mlx_0_0 00:19:23.762 04:10:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.762 04:10:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:23.762 04:10:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.762 04:10:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:23.762 04:10:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.762 04:10:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: 
mlx_0_1' 00:19:23.762 Found net devices under 0000:18:00.1: mlx_0_1 00:19:23.762 04:10:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.763 04:10:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:23.763 04:10:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:23.763 04:10:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:23.763 04:10:37 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:19:23.763 04:10:37 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:19:23.763 04:10:37 -- nvmf/common.sh@409 -- # rdma_device_init 00:19:23.763 04:10:37 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:19:23.763 04:10:37 -- nvmf/common.sh@58 -- # uname 00:19:23.763 04:10:37 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:23.763 04:10:37 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:23.763 04:10:37 -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:23.763 04:10:37 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:23.763 04:10:37 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:23.763 04:10:37 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:23.763 04:10:37 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:23.763 04:10:37 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:23.763 04:10:37 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:19:23.763 04:10:37 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:23.763 04:10:37 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:23.763 04:10:37 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:23.763 04:10:37 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:23.763 04:10:37 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:23.763 04:10:37 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:23.763 04:10:38 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:23.763 04:10:38 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:23.763 04:10:38 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.763 
04:10:38 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:23.763 04:10:38 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:23.763 04:10:38 -- nvmf/common.sh@105 -- # continue 2 00:19:23.763 04:10:38 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:23.763 04:10:38 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.763 04:10:38 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:23.763 04:10:38 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.763 04:10:38 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:23.763 04:10:38 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:23.763 04:10:38 -- nvmf/common.sh@105 -- # continue 2 00:19:23.763 04:10:38 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:23.763 04:10:38 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:23.763 04:10:38 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:23.763 04:10:38 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:23.763 04:10:38 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:23.763 04:10:38 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:23.763 04:10:38 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:23.763 04:10:38 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:23.763 04:10:38 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:23.763 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:23.763 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:19:23.763 altname enp24s0f0np0 00:19:23.763 altname ens785f0np0 00:19:23.763 inet 192.168.100.8/24 scope global mlx_0_0 00:19:23.763 valid_lft forever preferred_lft forever 00:19:23.763 04:10:38 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:23.763 04:10:38 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:23.763 04:10:38 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:23.763 04:10:38 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:23.763 
04:10:38 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:23.763 04:10:38 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:23.763 04:10:38 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:23.763 04:10:38 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:23.763 04:10:38 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:23.763 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:23.763 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:19:23.763 altname enp24s0f1np1 00:19:23.763 altname ens785f1np1 00:19:23.763 inet 192.168.100.9/24 scope global mlx_0_1 00:19:23.763 valid_lft forever preferred_lft forever 00:19:23.763 04:10:38 -- nvmf/common.sh@411 -- # return 0 00:19:23.763 04:10:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:23.763 04:10:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:23.763 04:10:38 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:19:23.763 04:10:38 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:19:23.763 04:10:38 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:23.763 04:10:38 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:23.763 04:10:38 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:23.763 04:10:38 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:23.763 04:10:38 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:23.763 04:10:38 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:23.763 04:10:38 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:23.763 04:10:38 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.763 04:10:38 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:23.763 04:10:38 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:23.763 04:10:38 -- nvmf/common.sh@105 -- # continue 2 00:19:23.763 04:10:38 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:23.763 04:10:38 -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:19:23.763 04:10:38 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:23.763 04:10:38 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.763 04:10:38 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:23.763 04:10:38 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:23.763 04:10:38 -- nvmf/common.sh@105 -- # continue 2 00:19:23.763 04:10:38 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:23.763 04:10:38 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:23.763 04:10:38 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:23.763 04:10:38 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:23.763 04:10:38 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:23.763 04:10:38 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:23.763 04:10:38 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:23.763 04:10:38 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:23.763 04:10:38 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:23.763 04:10:38 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:23.763 04:10:38 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:23.763 04:10:38 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:23.763 04:10:38 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:19:23.763 192.168.100.9' 00:19:23.763 04:10:38 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:23.763 192.168.100.9' 00:19:23.763 04:10:38 -- nvmf/common.sh@446 -- # head -n 1 00:19:23.763 04:10:38 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:23.763 04:10:38 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:19:23.763 192.168.100.9' 00:19:23.763 04:10:38 -- nvmf/common.sh@447 -- # tail -n +2 00:19:23.763 04:10:38 -- nvmf/common.sh@447 -- # head -n 1 00:19:23.763 04:10:38 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:23.763 04:10:38 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:19:23.763 04:10:38 -- 
nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:23.763 04:10:38 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:19:23.763 04:10:38 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:19:23.763 04:10:38 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:19:23.763 04:10:38 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:19:23.763 04:10:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:23.763 04:10:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:23.763 04:10:38 -- common/autotest_common.sh@10 -- # set +x 00:19:23.763 04:10:38 -- nvmf/common.sh@470 -- # nvmfpid=362744 00:19:23.763 04:10:38 -- nvmf/common.sh@471 -- # waitforlisten 362744 00:19:23.763 04:10:38 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:23.763 04:10:38 -- common/autotest_common.sh@817 -- # '[' -z 362744 ']' 00:19:23.763 04:10:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.763 04:10:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:23.763 04:10:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.763 04:10:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:23.763 04:10:38 -- common/autotest_common.sh@10 -- # set +x 00:19:23.763 [2024-04-19 04:10:38.183863] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:19:23.763 [2024-04-19 04:10:38.183903] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.763 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.763 [2024-04-19 04:10:38.234964] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:24.022 [2024-04-19 04:10:38.302382] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.022 [2024-04-19 04:10:38.302420] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.022 [2024-04-19 04:10:38.302426] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.022 [2024-04-19 04:10:38.302431] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.022 [2024-04-19 04:10:38.302436] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:24.022 [2024-04-19 04:10:38.302490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.022 [2024-04-19 04:10:38.302583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.022 [2024-04-19 04:10:38.302648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:24.022 [2024-04-19 04:10:38.302649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.663 04:10:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:24.663 04:10:38 -- common/autotest_common.sh@850 -- # return 0 00:19:24.663 04:10:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:24.663 04:10:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:24.663 04:10:38 -- common/autotest_common.sh@10 -- # set +x 00:19:24.663 04:10:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.663 04:10:38 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:19:24.663 04:10:38 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:19:27.943 04:10:41 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:19:27.943 04:10:41 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:19:27.943 04:10:42 -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:19:27.943 04:10:42 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:27.943 04:10:42 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:19:27.943 04:10:42 -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:19:27.943 04:10:42 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:19:27.943 04:10:42 -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:19:27.943 04:10:42 -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma 
--num-shared-buffers 1024 -c 0 00:19:27.943 [2024-04-19 04:10:42.472155] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:19:28.201 [2024-04-19 04:10:42.490855] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x62fb40/0x77d840) succeed. 00:19:28.201 [2024-04-19 04:10:42.500089] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x631130/0x63d640) succeed. 00:19:28.201 04:10:42 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:28.459 04:10:42 -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:28.459 04:10:42 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:28.459 04:10:42 -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:28.459 04:10:42 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:19:28.717 04:10:43 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:28.975 [2024-04-19 04:10:43.272908] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:28.975 04:10:43 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:19:28.975 04:10:43 -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:19:28.975 04:10:43 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:19:28.975 04:10:43 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:19:28.975 04:10:43 -- host/perf.sh@24 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:19:30.351 Initializing NVMe Controllers 00:19:30.351 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:19:30.351 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:19:30.351 Initialization complete. Launching workers. 00:19:30.351 ======================================================== 00:19:30.351 Latency(us) 00:19:30.351 Device Information : IOPS MiB/s Average min max 00:19:30.351 PCIE (0000:d8:00.0) NSID 1 from core 0: 106266.09 415.10 300.70 33.20 4267.33 00:19:30.351 ======================================================== 00:19:30.351 Total : 106266.09 415.10 300.70 33.20 4267.33 00:19:30.351 00:19:30.351 04:10:44 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:19:30.351 EAL: No free 2048 kB hugepages reported on node 1 00:19:33.634 Initializing NVMe Controllers 00:19:33.634 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:19:33.634 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:33.634 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:33.634 Initialization complete. Launching workers. 
00:19:33.634 ======================================================== 00:19:33.634 Latency(us) 00:19:33.634 Device Information : IOPS MiB/s Average min max 00:19:33.634 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7274.54 28.42 136.18 45.67 4077.20 00:19:33.634 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5580.58 21.80 178.29 64.30 4096.33 00:19:33.634 ======================================================== 00:19:33.634 Total : 12855.12 50.22 154.46 45.67 4096.33 00:19:33.634 00:19:33.634 04:10:47 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:19:33.634 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.920 Initializing NVMe Controllers 00:19:36.920 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:19:36.920 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:36.920 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:36.920 Initialization complete. Launching workers. 
00:19:36.920 ======================================================== 00:19:36.920 Latency(us) 00:19:36.920 Device Information : IOPS MiB/s Average min max 00:19:36.920 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19892.98 77.71 1608.99 463.83 8093.22 00:19:36.920 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3968.00 15.50 8100.26 6598.03 16148.63 00:19:36.920 ======================================================== 00:19:36.920 Total : 23860.98 93.21 2688.46 463.83 16148.63 00:19:36.920 00:19:36.920 04:10:51 -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:19:36.920 04:10:51 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:19:36.920 EAL: No free 2048 kB hugepages reported on node 1 00:19:42.196 Initializing NVMe Controllers 00:19:42.196 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:19:42.196 Controller IO queue size 128, less than required. 00:19:42.196 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:42.196 Controller IO queue size 128, less than required. 00:19:42.196 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:42.196 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:42.196 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:42.196 Initialization complete. Launching workers. 
00:19:42.196 ======================================================== 00:19:42.196 Latency(us) 00:19:42.196 Device Information : IOPS MiB/s Average min max 00:19:42.196 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3745.02 936.25 34334.08 15477.28 84626.14 00:19:42.196 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3804.00 951.00 33453.48 15458.60 55426.47 00:19:42.196 ======================================================== 00:19:42.196 Total : 7549.02 1887.25 33890.34 15458.60 84626.14 00:19:42.196 00:19:42.196 04:10:55 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:19:42.196 EAL: No free 2048 kB hugepages reported on node 1 00:19:42.196 No valid NVMe controllers or AIO or URING devices found 00:19:42.196 Initializing NVMe Controllers 00:19:42.196 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:19:42.196 Controller IO queue size 128, less than required. 00:19:42.196 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:42.196 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:19:42.196 Controller IO queue size 128, less than required. 00:19:42.196 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:42.196 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:19:42.196 WARNING: Some requested NVMe devices were skipped 00:19:42.196 04:10:56 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:19:42.196 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.387 Initializing NVMe Controllers 00:19:46.387 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:19:46.387 Controller IO queue size 128, less than required. 00:19:46.387 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:46.387 Controller IO queue size 128, less than required. 00:19:46.387 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:46.387 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:46.387 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:46.387 Initialization complete. Launching workers. 
00:19:46.387 00:19:46.387 ==================== 00:19:46.387 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:19:46.387 RDMA transport: 00:19:46.387 dev name: mlx5_0 00:19:46.387 polls: 434299 00:19:46.387 idle_polls: 430140 00:19:46.387 completions: 47978 00:19:46.387 queued_requests: 1 00:19:46.387 total_send_wrs: 23989 00:19:46.387 send_doorbell_updates: 3914 00:19:46.387 total_recv_wrs: 24116 00:19:46.387 recv_doorbell_updates: 3919 00:19:46.387 --------------------------------- 00:19:46.387 00:19:46.387 ==================== 00:19:46.387 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:19:46.387 RDMA transport: 00:19:46.387 dev name: mlx5_0 00:19:46.387 polls: 438874 00:19:46.387 idle_polls: 438599 00:19:46.387 completions: 21318 00:19:46.387 queued_requests: 1 00:19:46.387 total_send_wrs: 10659 00:19:46.387 send_doorbell_updates: 255 00:19:46.387 total_recv_wrs: 10786 00:19:46.387 recv_doorbell_updates: 256 00:19:46.387 --------------------------------- 00:19:46.387 ======================================================== 00:19:46.387 Latency(us) 00:19:46.387 Device Information : IOPS MiB/s Average min max 00:19:46.387 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5997.00 1499.25 21381.96 10285.74 63052.74 00:19:46.387 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2664.50 666.12 47896.22 24309.51 71715.10 00:19:46.387 ======================================================== 00:19:46.387 Total : 8661.50 2165.38 29538.43 10285.74 71715.10 00:19:46.387 00:19:46.387 04:11:00 -- host/perf.sh@66 -- # sync 00:19:46.387 04:11:00 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:46.387 04:11:00 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:19:46.387 04:11:00 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:19:46.387 
04:11:00 -- host/perf.sh@114 -- # nvmftestfini 00:19:46.387 04:11:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:46.387 04:11:00 -- nvmf/common.sh@117 -- # sync 00:19:46.387 04:11:00 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:46.387 04:11:00 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:46.387 04:11:00 -- nvmf/common.sh@120 -- # set +e 00:19:46.387 04:11:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:46.387 04:11:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:46.387 rmmod nvme_rdma 00:19:46.387 rmmod nvme_fabrics 00:19:46.387 04:11:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:46.387 04:11:00 -- nvmf/common.sh@124 -- # set -e 00:19:46.387 04:11:00 -- nvmf/common.sh@125 -- # return 0 00:19:46.387 04:11:00 -- nvmf/common.sh@478 -- # '[' -n 362744 ']' 00:19:46.387 04:11:00 -- nvmf/common.sh@479 -- # killprocess 362744 00:19:46.387 04:11:00 -- common/autotest_common.sh@936 -- # '[' -z 362744 ']' 00:19:46.387 04:11:00 -- common/autotest_common.sh@940 -- # kill -0 362744 00:19:46.387 04:11:00 -- common/autotest_common.sh@941 -- # uname 00:19:46.387 04:11:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:46.387 04:11:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 362744 00:19:46.387 04:11:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:46.387 04:11:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:46.387 04:11:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 362744' 00:19:46.387 killing process with pid 362744 00:19:46.387 04:11:00 -- common/autotest_common.sh@955 -- # kill 362744 00:19:46.387 04:11:00 -- common/autotest_common.sh@960 -- # wait 362744 00:19:50.576 04:11:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:50.576 04:11:04 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:19:50.576 00:19:50.576 real 0m31.824s 00:19:50.576 user 1m46.642s 00:19:50.576 sys 0m5.036s 00:19:50.576 04:11:04 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:19:50.576 04:11:04 -- common/autotest_common.sh@10 -- # set +x 00:19:50.576 ************************************ 00:19:50.576 END TEST nvmf_perf 00:19:50.576 ************************************ 00:19:50.576 04:11:04 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:19:50.576 04:11:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:50.576 04:11:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:50.576 04:11:04 -- common/autotest_common.sh@10 -- # set +x 00:19:50.576 ************************************ 00:19:50.576 START TEST nvmf_fio_host 00:19:50.576 ************************************ 00:19:50.576 04:11:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:19:50.576 * Looking for test storage... 00:19:50.576 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:50.576 04:11:04 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:50.576 04:11:04 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:50.576 04:11:04 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.576 04:11:04 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.576 04:11:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.576 
04:11:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.576 04:11:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.576 04:11:04 -- paths/export.sh@5 -- # export PATH 00:19:50.576 04:11:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.576 04:11:04 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:50.576 04:11:04 -- nvmf/common.sh@7 -- # uname -s 00:19:50.576 04:11:04 -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:50.576 04:11:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:50.576 04:11:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:50.576 04:11:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:50.576 04:11:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:50.576 04:11:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:50.576 04:11:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:50.576 04:11:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:50.576 04:11:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:50.576 04:11:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:50.576 04:11:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:19:50.576 04:11:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:19:50.576 04:11:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:50.576 04:11:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:50.576 04:11:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:50.576 04:11:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:50.576 04:11:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:50.576 04:11:04 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:50.577 04:11:04 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.577 04:11:04 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.577 04:11:04 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.577 04:11:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.577 04:11:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.577 04:11:04 -- paths/export.sh@5 -- # export PATH 00:19:50.577 04:11:04 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.577 04:11:04 -- nvmf/common.sh@47 -- # : 0 00:19:50.577 04:11:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:50.577 04:11:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:50.577 04:11:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:50.577 04:11:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:50.577 04:11:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:50.577 04:11:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:50.577 04:11:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:50.577 04:11:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:50.577 04:11:04 -- host/fio.sh@12 -- # nvmftestinit 00:19:50.577 04:11:04 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:19:50.577 04:11:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:50.577 04:11:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:50.577 04:11:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:50.577 04:11:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:50.577 04:11:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.577 04:11:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:50.577 04:11:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.577 04:11:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:50.577 04:11:04 -- nvmf/common.sh@403 -- # 
gather_supported_nvmf_pci_devs 00:19:50.577 04:11:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:50.577 04:11:04 -- common/autotest_common.sh@10 -- # set +x 00:19:55.850 04:11:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:55.850 04:11:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:55.850 04:11:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:55.850 04:11:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:55.850 04:11:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:55.850 04:11:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:55.850 04:11:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:55.850 04:11:10 -- nvmf/common.sh@295 -- # net_devs=() 00:19:55.850 04:11:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:55.850 04:11:10 -- nvmf/common.sh@296 -- # e810=() 00:19:55.850 04:11:10 -- nvmf/common.sh@296 -- # local -ga e810 00:19:55.850 04:11:10 -- nvmf/common.sh@297 -- # x722=() 00:19:55.850 04:11:10 -- nvmf/common.sh@297 -- # local -ga x722 00:19:55.850 04:11:10 -- nvmf/common.sh@298 -- # mlx=() 00:19:55.850 04:11:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:55.850 04:11:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:55.850 04:11:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:55.850 04:11:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:55.850 04:11:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:55.850 04:11:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:55.850 04:11:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:55.850 04:11:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:55.850 04:11:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:55.850 04:11:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:55.850 04:11:10 -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:55.850 04:11:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:55.850 04:11:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:55.850 04:11:10 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:55.850 04:11:10 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:55.850 04:11:10 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:55.850 04:11:10 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:55.850 04:11:10 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:55.850 04:11:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:55.850 04:11:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.850 04:11:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:19:55.850 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:19:55.850 04:11:10 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:55.850 04:11:10 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:55.850 04:11:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:55.850 04:11:10 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:55.850 04:11:10 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:55.850 04:11:10 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:55.850 04:11:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.850 04:11:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:19:55.850 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:19:55.850 04:11:10 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:55.850 04:11:10 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:55.850 04:11:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:55.850 04:11:10 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:55.850 04:11:10 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:55.850 04:11:10 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme 
connect -i 15' 00:19:55.850 04:11:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:55.850 04:11:10 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:55.850 04:11:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.850 04:11:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.850 04:11:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:55.850 04:11:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.850 04:11:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:55.850 Found net devices under 0000:18:00.0: mlx_0_0 00:19:55.850 04:11:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.850 04:11:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.850 04:11:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.850 04:11:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:55.850 04:11:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.850 04:11:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:55.850 Found net devices under 0000:18:00.1: mlx_0_1 00:19:55.850 04:11:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.850 04:11:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:55.850 04:11:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:55.850 04:11:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:55.850 04:11:10 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:19:55.850 04:11:10 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:19:55.850 04:11:10 -- nvmf/common.sh@409 -- # rdma_device_init 00:19:55.850 04:11:10 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:19:55.850 04:11:10 -- nvmf/common.sh@58 -- # uname 00:19:55.850 04:11:10 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:55.850 04:11:10 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:55.850 04:11:10 -- nvmf/common.sh@63 -- # modprobe ib_core 
00:19:55.850 04:11:10 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:55.850 04:11:10 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:55.850 04:11:10 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:55.850 04:11:10 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:55.850 04:11:10 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:55.850 04:11:10 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:19:55.850 04:11:10 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:55.850 04:11:10 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:55.850 04:11:10 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:55.850 04:11:10 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:55.850 04:11:10 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:55.850 04:11:10 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:55.850 04:11:10 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:55.850 04:11:10 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:55.850 04:11:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.850 04:11:10 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:55.850 04:11:10 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:55.850 04:11:10 -- nvmf/common.sh@105 -- # continue 2 00:19:55.850 04:11:10 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:55.850 04:11:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.850 04:11:10 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:55.850 04:11:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.850 04:11:10 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:55.850 04:11:10 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:55.850 04:11:10 -- nvmf/common.sh@105 -- # continue 2 00:19:55.850 04:11:10 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:55.850 04:11:10 -- nvmf/common.sh@74 -- # 
get_ip_address mlx_0_0 00:19:55.850 04:11:10 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:55.850 04:11:10 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:55.850 04:11:10 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:55.850 04:11:10 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:55.850 04:11:10 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:55.850 04:11:10 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:55.850 04:11:10 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:55.850 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:55.850 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:19:55.850 altname enp24s0f0np0 00:19:55.850 altname ens785f0np0 00:19:55.850 inet 192.168.100.8/24 scope global mlx_0_0 00:19:55.850 valid_lft forever preferred_lft forever 00:19:55.851 04:11:10 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:55.851 04:11:10 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:55.851 04:11:10 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:55.851 04:11:10 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:55.851 04:11:10 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:55.851 04:11:10 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:55.851 04:11:10 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:55.851 04:11:10 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:55.851 04:11:10 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:55.851 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:55.851 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:19:55.851 altname enp24s0f1np1 00:19:55.851 altname ens785f1np1 00:19:55.851 inet 192.168.100.9/24 scope global mlx_0_1 00:19:55.851 valid_lft forever preferred_lft forever 00:19:55.851 04:11:10 -- nvmf/common.sh@411 -- # return 0 00:19:55.851 04:11:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:55.851 04:11:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:55.851 04:11:10 -- 
nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:19:55.851 04:11:10 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:19:55.851 04:11:10 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:55.851 04:11:10 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:55.851 04:11:10 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:55.851 04:11:10 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:55.851 04:11:10 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:55.851 04:11:10 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:55.851 04:11:10 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:55.851 04:11:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.851 04:11:10 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:55.851 04:11:10 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:55.851 04:11:10 -- nvmf/common.sh@105 -- # continue 2 00:19:55.851 04:11:10 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:55.851 04:11:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.851 04:11:10 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:55.851 04:11:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.851 04:11:10 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:55.851 04:11:10 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:55.851 04:11:10 -- nvmf/common.sh@105 -- # continue 2 00:19:55.851 04:11:10 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:55.851 04:11:10 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:55.851 04:11:10 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:55.851 04:11:10 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:55.851 04:11:10 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:55.851 04:11:10 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:55.851 04:11:10 -- nvmf/common.sh@86 -- # for nic_name in 
$(get_rdma_if_list) 00:19:55.851 04:11:10 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:55.851 04:11:10 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:55.851 04:11:10 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:55.851 04:11:10 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:55.851 04:11:10 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:55.851 04:11:10 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:19:55.851 192.168.100.9' 00:19:55.851 04:11:10 -- nvmf/common.sh@446 -- # head -n 1 00:19:55.851 04:11:10 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:55.851 192.168.100.9' 00:19:55.851 04:11:10 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:55.851 04:11:10 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:19:55.851 192.168.100.9' 00:19:55.851 04:11:10 -- nvmf/common.sh@447 -- # tail -n +2 00:19:55.851 04:11:10 -- nvmf/common.sh@447 -- # head -n 1 00:19:55.851 04:11:10 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:55.851 04:11:10 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:19:55.851 04:11:10 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:55.851 04:11:10 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:19:55.851 04:11:10 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:19:55.851 04:11:10 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:19:55.851 04:11:10 -- host/fio.sh@14 -- # [[ y != y ]] 00:19:55.851 04:11:10 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:19:55.851 04:11:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:55.851 04:11:10 -- common/autotest_common.sh@10 -- # set +x 00:19:55.851 04:11:10 -- host/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:55.851 04:11:10 -- host/fio.sh@22 -- # nvmfpid=370754 00:19:55.851 04:11:10 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:55.851 
04:11:10 -- host/fio.sh@26 -- # waitforlisten 370754 00:19:55.851 04:11:10 -- common/autotest_common.sh@817 -- # '[' -z 370754 ']' 00:19:55.851 04:11:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.851 04:11:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:55.851 04:11:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.851 04:11:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:55.851 04:11:10 -- common/autotest_common.sh@10 -- # set +x 00:19:56.110 [2024-04-19 04:11:10.397759] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:19:56.110 [2024-04-19 04:11:10.397798] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.110 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.110 [2024-04-19 04:11:10.447879] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:56.110 [2024-04-19 04:11:10.520563] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.110 [2024-04-19 04:11:10.520598] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.110 [2024-04-19 04:11:10.520604] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.110 [2024-04-19 04:11:10.520610] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.110 [2024-04-19 04:11:10.520615] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:56.110 [2024-04-19 04:11:10.520649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.110 [2024-04-19 04:11:10.520730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.110 [2024-04-19 04:11:10.520755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:56.110 [2024-04-19 04:11:10.520757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.675 04:11:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:56.675 04:11:11 -- common/autotest_common.sh@850 -- # return 0 00:19:56.675 04:11:11 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:56.675 04:11:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.675 04:11:11 -- common/autotest_common.sh@10 -- # set +x 00:19:56.932 [2024-04-19 04:11:11.223489] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe4e6c0/0xe52bb0) succeed. 00:19:56.932 [2024-04-19 04:11:11.232871] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe4fcb0/0xe94240) succeed. 
00:19:56.932 04:11:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.932 04:11:11 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:19:56.932 04:11:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:56.932 04:11:11 -- common/autotest_common.sh@10 -- # set +x 00:19:56.932 04:11:11 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:56.932 04:11:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.932 04:11:11 -- common/autotest_common.sh@10 -- # set +x 00:19:56.932 Malloc1 00:19:56.932 04:11:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.932 04:11:11 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:56.932 04:11:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.932 04:11:11 -- common/autotest_common.sh@10 -- # set +x 00:19:56.932 04:11:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.932 04:11:11 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:56.932 04:11:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.932 04:11:11 -- common/autotest_common.sh@10 -- # set +x 00:19:56.932 04:11:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.933 04:11:11 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:56.933 04:11:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.933 04:11:11 -- common/autotest_common.sh@10 -- # set +x 00:19:56.933 [2024-04-19 04:11:11.427261] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:56.933 04:11:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.933 04:11:11 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:19:56.933 04:11:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.933 04:11:11 -- 
common/autotest_common.sh@10 -- # set +x 00:19:56.933 04:11:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.933 04:11:11 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:19:56.933 04:11:11 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:19:56.933 04:11:11 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:19:56.933 04:11:11 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:19:56.933 04:11:11 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:56.933 04:11:11 -- common/autotest_common.sh@1325 -- # local sanitizers 00:19:56.933 04:11:11 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:19:56.933 04:11:11 -- common/autotest_common.sh@1327 -- # shift 00:19:56.933 04:11:11 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:19:56.933 04:11:11 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:56.933 04:11:11 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:19:56.933 04:11:11 -- common/autotest_common.sh@1331 -- # grep libasan 00:19:56.933 04:11:11 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:57.219 04:11:11 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:57.219 04:11:11 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:57.219 04:11:11 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:57.219 04:11:11 -- common/autotest_common.sh@1331 -- # ldd 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:19:57.219 04:11:11 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:19:57.219 04:11:11 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:57.219 04:11:11 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:57.219 04:11:11 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:57.219 04:11:11 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:19:57.219 04:11:11 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:19:57.484 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:57.484 fio-3.35 00:19:57.484 Starting 1 thread 00:19:57.484 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.009 00:20:00.009 test: (groupid=0, jobs=1): err= 0: pid=371175: Fri Apr 19 04:11:14 2024 00:20:00.009 read: IOPS=18.8k, BW=73.5MiB/s (77.1MB/s)(147MiB/2003msec) 00:20:00.009 slat (nsec): min=1287, max=31643, avg=1395.59, stdev=353.63 00:20:00.009 clat (usec): min=1603, max=6069, avg=3374.93, stdev=70.27 00:20:00.009 lat (usec): min=1617, max=6071, avg=3376.32, stdev=70.17 00:20:00.009 clat percentiles (usec): 00:20:00.009 | 1.00th=[ 3326], 5.00th=[ 3359], 10.00th=[ 3359], 20.00th=[ 3359], 00:20:00.009 | 30.00th=[ 3359], 40.00th=[ 3359], 50.00th=[ 3359], 60.00th=[ 3392], 00:20:00.009 | 70.00th=[ 3392], 80.00th=[ 3392], 90.00th=[ 3392], 95.00th=[ 3392], 00:20:00.009 | 99.00th=[ 3425], 99.50th=[ 3425], 99.90th=[ 4293], 99.95th=[ 5145], 00:20:00.009 | 99.99th=[ 5997] 00:20:00.009 bw ( KiB/s): min=73800, max=75920, per=100.00%, avg=75292.00, stdev=1009.70, samples=4 00:20:00.009 iops : min=18450, max=18980, avg=18823.00, stdev=252.43, samples=4 00:20:00.010 write: IOPS=18.8k, BW=73.6MiB/s 
(77.2MB/s)(147MiB/2003msec); 0 zone resets 00:20:00.010 slat (nsec): min=1335, max=22161, avg=1697.92, stdev=429.55 00:20:00.010 clat (usec): min=2335, max=6057, avg=3373.81, stdev=77.98 00:20:00.010 lat (usec): min=2346, max=6059, avg=3375.51, stdev=77.90 00:20:00.010 clat percentiles (usec): 00:20:00.010 | 1.00th=[ 3326], 5.00th=[ 3359], 10.00th=[ 3359], 20.00th=[ 3359], 00:20:00.010 | 30.00th=[ 3359], 40.00th=[ 3359], 50.00th=[ 3359], 60.00th=[ 3392], 00:20:00.010 | 70.00th=[ 3392], 80.00th=[ 3392], 90.00th=[ 3392], 95.00th=[ 3392], 00:20:00.010 | 99.00th=[ 3425], 99.50th=[ 3425], 99.90th=[ 4359], 99.95th=[ 5604], 00:20:00.010 | 99.99th=[ 6063] 00:20:00.010 bw ( KiB/s): min=73808, max=75976, per=99.96%, avg=75314.00, stdev=1017.95, samples=4 00:20:00.010 iops : min=18452, max=18994, avg=18828.50, stdev=254.49, samples=4 00:20:00.010 lat (msec) : 2=0.01%, 4=99.88%, 10=0.11% 00:20:00.010 cpu : usr=99.50%, sys=0.10%, ctx=16, majf=0, minf=3 00:20:00.010 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:00.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:00.010 issued rwts: total=37702,37728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:00.010 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:00.010 00:20:00.010 Run status group 0 (all jobs): 00:20:00.010 READ: bw=73.5MiB/s (77.1MB/s), 73.5MiB/s-73.5MiB/s (77.1MB/s-77.1MB/s), io=147MiB (154MB), run=2003-2003msec 00:20:00.010 WRITE: bw=73.6MiB/s (77.2MB/s), 73.6MiB/s-73.6MiB/s (77.2MB/s-77.2MB/s), io=147MiB (155MB), run=2003-2003msec 00:20:00.010 04:11:14 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:20:00.010 04:11:14 -- common/autotest_common.sh@1346 -- # fio_plugin 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:20:00.010 04:11:14 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:20:00.010 04:11:14 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:00.010 04:11:14 -- common/autotest_common.sh@1325 -- # local sanitizers 00:20:00.010 04:11:14 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:00.010 04:11:14 -- common/autotest_common.sh@1327 -- # shift 00:20:00.010 04:11:14 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:20:00.010 04:11:14 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:20:00.010 04:11:14 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:00.010 04:11:14 -- common/autotest_common.sh@1331 -- # grep libasan 00:20:00.010 04:11:14 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:20:00.010 04:11:14 -- common/autotest_common.sh@1331 -- # asan_lib= 00:20:00.010 04:11:14 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:20:00.010 04:11:14 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:20:00.010 04:11:14 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:20:00.010 04:11:14 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:20:00.010 04:11:14 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:20:00.010 04:11:14 -- common/autotest_common.sh@1331 -- # asan_lib= 00:20:00.010 04:11:14 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:20:00.010 04:11:14 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:00.010 04:11:14 -- 
common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:20:00.010 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:00.010 fio-3.35 00:20:00.010 Starting 1 thread 00:20:00.010 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.539 00:20:02.539 test: (groupid=0, jobs=1): err= 0: pid=371822: Fri Apr 19 04:11:16 2024 00:20:02.539 read: IOPS=13.9k, BW=217MiB/s (228MB/s)(431MiB/1984msec) 00:20:02.539 slat (nsec): min=2128, max=50543, avg=2395.20, stdev=964.46 00:20:02.539 clat (usec): min=284, max=8248, avg=1631.39, stdev=1014.25 00:20:02.539 lat (usec): min=287, max=8268, avg=1633.79, stdev=1014.60 00:20:02.539 clat percentiles (usec): 00:20:02.539 | 1.00th=[ 553], 5.00th=[ 791], 10.00th=[ 914], 20.00th=[ 1074], 00:20:02.539 | 30.00th=[ 1188], 40.00th=[ 1270], 50.00th=[ 1369], 60.00th=[ 1483], 00:20:02.539 | 70.00th=[ 1631], 80.00th=[ 1827], 90.00th=[ 2245], 95.00th=[ 4555], 00:20:02.539 | 99.00th=[ 5997], 99.50th=[ 6587], 99.90th=[ 7373], 99.95th=[ 7701], 00:20:02.539 | 99.99th=[ 8160] 00:20:02.539 bw ( KiB/s): min=109408, max=109696, per=49.20%, avg=109544.00, stdev=157.58, samples=4 00:20:02.539 iops : min= 6838, max= 6856, avg=6846.50, stdev= 9.85, samples=4 00:20:02.540 write: IOPS=7630, BW=119MiB/s (125MB/s)(223MiB/1868msec); 0 zone resets 00:20:02.540 slat (usec): min=25, max=120, avg=27.70, stdev= 5.61 00:20:02.540 clat (usec): min=4262, max=20456, avg=13296.56, stdev=1772.68 00:20:02.540 lat (usec): min=4289, max=20485, avg=13324.26, stdev=1772.53 00:20:02.540 clat percentiles (usec): 00:20:02.540 | 1.00th=[ 6915], 5.00th=[10814], 10.00th=[11469], 20.00th=[11994], 00:20:02.540 | 30.00th=[12387], 40.00th=[12911], 50.00th=[13304], 60.00th=[13698], 00:20:02.540 | 70.00th=[14091], 80.00th=[14615], 90.00th=[15401], 95.00th=[16057], 
00:20:02.540 | 99.00th=[17695], 99.50th=[18482], 99.90th=[19530], 99.95th=[19792], 00:20:02.540 | 99.99th=[20317] 00:20:02.540 bw ( KiB/s): min=111136, max=114880, per=92.65%, avg=113112.00, stdev=1544.06, samples=4 00:20:02.540 iops : min= 6946, max= 7180, avg=7069.50, stdev=96.50, samples=4 00:20:02.540 lat (usec) : 500=0.39%, 750=2.23%, 1000=7.23% 00:20:02.540 lat (msec) : 2=46.49%, 4=6.01%, 10=4.22%, 20=33.41%, 50=0.01% 00:20:02.540 cpu : usr=97.41%, sys=0.90%, ctx=186, majf=0, minf=2 00:20:02.540 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:20:02.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:02.540 issued rwts: total=27607,14254,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.540 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:02.540 00:20:02.540 Run status group 0 (all jobs): 00:20:02.540 READ: bw=217MiB/s (228MB/s), 217MiB/s-217MiB/s (228MB/s-228MB/s), io=431MiB (452MB), run=1984-1984msec 00:20:02.540 WRITE: bw=119MiB/s (125MB/s), 119MiB/s-119MiB/s (125MB/s-125MB/s), io=223MiB (234MB), run=1868-1868msec 00:20:02.540 04:11:16 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:02.540 04:11:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.540 04:11:16 -- common/autotest_common.sh@10 -- # set +x 00:20:02.540 04:11:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.540 04:11:16 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:20:02.540 04:11:16 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:20:02.540 04:11:16 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:20:02.540 04:11:16 -- host/fio.sh@84 -- # nvmftestfini 00:20:02.540 04:11:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:02.540 04:11:16 -- nvmf/common.sh@117 -- # sync 00:20:02.540 04:11:16 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:02.540 04:11:16 -- 
nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:02.540 04:11:16 -- nvmf/common.sh@120 -- # set +e 00:20:02.540 04:11:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:02.540 04:11:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:02.540 rmmod nvme_rdma 00:20:02.540 rmmod nvme_fabrics 00:20:02.540 04:11:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:02.540 04:11:16 -- nvmf/common.sh@124 -- # set -e 00:20:02.540 04:11:16 -- nvmf/common.sh@125 -- # return 0 00:20:02.540 04:11:16 -- nvmf/common.sh@478 -- # '[' -n 370754 ']' 00:20:02.540 04:11:16 -- nvmf/common.sh@479 -- # killprocess 370754 00:20:02.540 04:11:16 -- common/autotest_common.sh@936 -- # '[' -z 370754 ']' 00:20:02.540 04:11:16 -- common/autotest_common.sh@940 -- # kill -0 370754 00:20:02.540 04:11:16 -- common/autotest_common.sh@941 -- # uname 00:20:02.540 04:11:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:02.540 04:11:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 370754 00:20:02.540 04:11:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:02.540 04:11:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:02.540 04:11:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 370754' 00:20:02.540 killing process with pid 370754 00:20:02.540 04:11:16 -- common/autotest_common.sh@955 -- # kill 370754 00:20:02.540 04:11:16 -- common/autotest_common.sh@960 -- # wait 370754 00:20:02.799 04:11:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:02.799 04:11:17 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:20:02.799 00:20:02.799 real 0m12.460s 00:20:02.799 user 0m48.424s 00:20:02.799 sys 0m4.951s 00:20:02.799 04:11:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:02.799 04:11:17 -- common/autotest_common.sh@10 -- # set +x 00:20:02.799 ************************************ 00:20:02.799 END TEST nvmf_fio_host 00:20:02.799 ************************************ 00:20:02.799 
04:11:17 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:20:02.799 04:11:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:02.799 04:11:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:02.799 04:11:17 -- common/autotest_common.sh@10 -- # set +x 00:20:03.058 ************************************ 00:20:03.058 START TEST nvmf_failover 00:20:03.058 ************************************ 00:20:03.058 04:11:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:20:03.058 * Looking for test storage... 00:20:03.058 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:03.058 04:11:17 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:03.058 04:11:17 -- nvmf/common.sh@7 -- # uname -s 00:20:03.058 04:11:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:03.058 04:11:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:03.058 04:11:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:03.058 04:11:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:03.058 04:11:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:03.058 04:11:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:03.058 04:11:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:03.058 04:11:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:03.058 04:11:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:03.058 04:11:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:03.058 04:11:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:20:03.058 04:11:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:20:03.058 04:11:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:20:03.058 04:11:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:03.058 04:11:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:03.058 04:11:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:03.058 04:11:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:03.059 04:11:17 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:03.059 04:11:17 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:03.059 04:11:17 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:03.059 04:11:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.059 04:11:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.059 04:11:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.059 04:11:17 -- paths/export.sh@5 -- # export PATH 00:20:03.059 04:11:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.059 04:11:17 -- nvmf/common.sh@47 -- # : 0 00:20:03.059 04:11:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:03.059 04:11:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:03.059 04:11:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:03.059 04:11:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:03.059 04:11:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:03.059 04:11:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:03.059 04:11:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:03.059 04:11:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:03.059 04:11:17 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:03.059 04:11:17 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:03.059 04:11:17 -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:03.059 04:11:17 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:03.059 04:11:17 -- host/failover.sh@18 -- # nvmftestinit 00:20:03.059 04:11:17 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:20:03.059 04:11:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:03.059 04:11:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:03.059 04:11:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:03.059 04:11:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:03.059 04:11:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.059 04:11:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:03.059 04:11:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.059 04:11:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:03.059 04:11:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:03.059 04:11:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:03.059 04:11:17 -- common/autotest_common.sh@10 -- # set +x 00:20:08.329 04:11:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:08.329 04:11:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:08.329 04:11:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:08.329 04:11:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:08.329 04:11:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:08.329 04:11:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:08.329 04:11:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:08.329 04:11:22 -- nvmf/common.sh@295 -- # net_devs=() 00:20:08.329 04:11:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:08.329 04:11:22 -- nvmf/common.sh@296 -- # e810=() 00:20:08.329 04:11:22 -- nvmf/common.sh@296 -- # local -ga e810 00:20:08.329 04:11:22 -- nvmf/common.sh@297 -- # x722=() 00:20:08.329 04:11:22 -- nvmf/common.sh@297 -- # local -ga x722 00:20:08.329 04:11:22 -- 
nvmf/common.sh@298 -- # mlx=() 00:20:08.329 04:11:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:08.329 04:11:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:08.329 04:11:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:08.329 04:11:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:08.329 04:11:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:08.329 04:11:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:08.329 04:11:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:08.329 04:11:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:08.329 04:11:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:08.329 04:11:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:08.329 04:11:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:08.329 04:11:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:08.329 04:11:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:08.329 04:11:22 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:08.329 04:11:22 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:08.329 04:11:22 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:08.329 04:11:22 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:08.329 04:11:22 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:08.329 04:11:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:08.329 04:11:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:08.329 04:11:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:20:08.329 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:20:08.329 04:11:22 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:08.329 04:11:22 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:08.329 04:11:22 -- 
nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:08.329 04:11:22 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:08.329 04:11:22 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:08.329 04:11:22 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:08.329 04:11:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:08.329 04:11:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:20:08.329 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:20:08.329 04:11:22 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:08.329 04:11:22 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:08.329 04:11:22 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:08.329 04:11:22 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:08.329 04:11:22 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:08.329 04:11:22 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:08.329 04:11:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:08.330 04:11:22 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:08.330 04:11:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:08.330 04:11:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.330 04:11:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:08.330 04:11:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.330 04:11:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:20:08.330 Found net devices under 0000:18:00.0: mlx_0_0 00:20:08.330 04:11:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.330 04:11:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:08.330 04:11:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.330 04:11:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:08.330 04:11:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.330 
04:11:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:20:08.330 Found net devices under 0000:18:00.1: mlx_0_1 00:20:08.330 04:11:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.330 04:11:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:08.330 04:11:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:08.330 04:11:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:08.330 04:11:22 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:20:08.330 04:11:22 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:20:08.330 04:11:22 -- nvmf/common.sh@409 -- # rdma_device_init 00:20:08.330 04:11:22 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:20:08.330 04:11:22 -- nvmf/common.sh@58 -- # uname 00:20:08.330 04:11:22 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:08.330 04:11:22 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:08.330 04:11:22 -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:08.330 04:11:22 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:08.330 04:11:22 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:08.330 04:11:22 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:08.330 04:11:22 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:08.330 04:11:22 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:08.330 04:11:22 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:20:08.330 04:11:22 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:08.330 04:11:22 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:08.330 04:11:22 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:08.330 04:11:22 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:08.330 04:11:22 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:08.330 04:11:22 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:08.589 04:11:22 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:08.589 04:11:22 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:08.589 04:11:22 -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:08.589 04:11:22 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:08.589 04:11:22 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:08.589 04:11:22 -- nvmf/common.sh@105 -- # continue 2 00:20:08.589 04:11:22 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:08.589 04:11:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:08.589 04:11:22 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:08.589 04:11:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:08.589 04:11:22 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:08.589 04:11:22 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:08.589 04:11:22 -- nvmf/common.sh@105 -- # continue 2 00:20:08.589 04:11:22 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:08.589 04:11:22 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:08.589 04:11:22 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:08.589 04:11:22 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:08.589 04:11:22 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:08.589 04:11:22 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:08.589 04:11:22 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:08.589 04:11:22 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:08.589 04:11:22 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:08.589 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:08.589 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:20:08.589 altname enp24s0f0np0 00:20:08.589 altname ens785f0np0 00:20:08.589 inet 192.168.100.8/24 scope global mlx_0_0 00:20:08.589 valid_lft forever preferred_lft forever 00:20:08.589 04:11:22 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:08.589 04:11:22 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:08.589 04:11:22 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:08.589 
04:11:22 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:08.589 04:11:22 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:08.589 04:11:22 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:08.589 04:11:22 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:08.589 04:11:22 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:08.589 04:11:22 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:08.589 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:08.589 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:20:08.589 altname enp24s0f1np1 00:20:08.589 altname ens785f1np1 00:20:08.589 inet 192.168.100.9/24 scope global mlx_0_1 00:20:08.589 valid_lft forever preferred_lft forever 00:20:08.589 04:11:22 -- nvmf/common.sh@411 -- # return 0 00:20:08.589 04:11:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:08.589 04:11:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:08.589 04:11:22 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:20:08.589 04:11:22 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:20:08.589 04:11:22 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:08.589 04:11:22 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:08.589 04:11:22 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:08.589 04:11:22 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:08.589 04:11:22 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:08.589 04:11:22 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:08.589 04:11:22 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:08.589 04:11:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:08.589 04:11:22 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:08.589 04:11:22 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:08.589 04:11:22 -- nvmf/common.sh@105 -- # continue 2 00:20:08.589 04:11:22 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 
00:20:08.589 04:11:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:08.589 04:11:22 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:08.589 04:11:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:08.589 04:11:22 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:08.589 04:11:22 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:08.589 04:11:22 -- nvmf/common.sh@105 -- # continue 2 00:20:08.589 04:11:22 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:08.589 04:11:22 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:08.589 04:11:22 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:08.589 04:11:22 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:08.589 04:11:22 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:08.589 04:11:22 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:08.589 04:11:22 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:08.589 04:11:22 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:08.589 04:11:22 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:08.589 04:11:22 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:08.589 04:11:22 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:08.589 04:11:22 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:08.589 04:11:22 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:20:08.589 192.168.100.9' 00:20:08.589 04:11:22 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:08.589 192.168.100.9' 00:20:08.589 04:11:22 -- nvmf/common.sh@446 -- # head -n 1 00:20:08.589 04:11:22 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:08.589 04:11:22 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:20:08.589 192.168.100.9' 00:20:08.589 04:11:22 -- nvmf/common.sh@447 -- # tail -n +2 00:20:08.589 04:11:22 -- nvmf/common.sh@447 -- # head -n 1 00:20:08.589 04:11:22 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:08.589 04:11:22 -- 
nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:20:08.589 04:11:22 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:08.589 04:11:22 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:20:08.589 04:11:22 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:20:08.589 04:11:22 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:20:08.589 04:11:22 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:08.589 04:11:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:08.589 04:11:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:08.589 04:11:22 -- common/autotest_common.sh@10 -- # set +x 00:20:08.589 04:11:22 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:08.589 04:11:22 -- nvmf/common.sh@470 -- # nvmfpid=375593 00:20:08.589 04:11:22 -- nvmf/common.sh@471 -- # waitforlisten 375593 00:20:08.589 04:11:22 -- common/autotest_common.sh@817 -- # '[' -z 375593 ']' 00:20:08.589 04:11:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.589 04:11:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:08.589 04:11:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.589 04:11:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:08.589 04:11:22 -- common/autotest_common.sh@10 -- # set +x 00:20:08.589 [2024-04-19 04:11:23.022208] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:20:08.589 [2024-04-19 04:11:23.022248] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.589 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.589 [2024-04-19 04:11:23.068648] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:08.846 [2024-04-19 04:11:23.143278] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.846 [2024-04-19 04:11:23.143311] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.846 [2024-04-19 04:11:23.143318] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.846 [2024-04-19 04:11:23.143323] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.846 [2024-04-19 04:11:23.143328] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:08.846 [2024-04-19 04:11:23.143424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.846 [2024-04-19 04:11:23.143638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:08.846 [2024-04-19 04:11:23.143640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.412 04:11:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:09.412 04:11:23 -- common/autotest_common.sh@850 -- # return 0 00:20:09.412 04:11:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:09.412 04:11:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:09.412 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:20:09.412 04:11:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.412 04:11:23 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:09.670 [2024-04-19 04:11:24.001584] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x785ee0/0x78a3d0) succeed. 00:20:09.670 [2024-04-19 04:11:24.010798] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x787430/0x7cba60) succeed. 
00:20:09.670 04:11:24 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:09.928 Malloc0 00:20:09.928 04:11:24 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:10.186 04:11:24 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:10.186 04:11:24 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:10.443 [2024-04-19 04:11:24.773913] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:10.443 04:11:24 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:20:10.443 [2024-04-19 04:11:24.942187] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:20:10.443 04:11:24 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:20:10.700 [2024-04-19 04:11:25.106765] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:20:10.700 04:11:25 -- host/failover.sh@31 -- # bdevperf_pid=375890 00:20:10.701 04:11:25 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:10.701 04:11:25 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
verify -t 15 -f 00:20:10.701 04:11:25 -- host/failover.sh@34 -- # waitforlisten 375890 /var/tmp/bdevperf.sock 00:20:10.701 04:11:25 -- common/autotest_common.sh@817 -- # '[' -z 375890 ']' 00:20:10.701 04:11:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:10.701 04:11:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:10.701 04:11:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:10.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:10.701 04:11:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:10.701 04:11:25 -- common/autotest_common.sh@10 -- # set +x 00:20:11.631 04:11:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:11.631 04:11:25 -- common/autotest_common.sh@850 -- # return 0 00:20:11.631 04:11:25 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:11.889 NVMe0n1 00:20:11.889 04:11:26 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:11.889 00:20:12.146 04:11:26 -- host/failover.sh@39 -- # run_test_pid=376155 00:20:12.146 04:11:26 -- host/failover.sh@41 -- # sleep 1 00:20:12.146 04:11:26 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:13.078 04:11:27 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:13.078 04:11:27 -- host/failover.sh@45 -- # sleep 3 
00:20:16.353 04:11:30 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:16.353 00:20:16.353 04:11:30 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:20:16.609 04:11:31 -- host/failover.sh@50 -- # sleep 3 00:20:19.886 04:11:34 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:19.886 [2024-04-19 04:11:34.163217] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:19.886 04:11:34 -- host/failover.sh@55 -- # sleep 1 00:20:20.817 04:11:35 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:20:21.074 04:11:35 -- host/failover.sh@59 -- # wait 376155 00:20:27.631 0 00:20:27.631 04:11:41 -- host/failover.sh@61 -- # killprocess 375890 00:20:27.631 04:11:41 -- common/autotest_common.sh@936 -- # '[' -z 375890 ']' 00:20:27.631 04:11:41 -- common/autotest_common.sh@940 -- # kill -0 375890 00:20:27.631 04:11:41 -- common/autotest_common.sh@941 -- # uname 00:20:27.631 04:11:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:27.631 04:11:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 375890 00:20:27.631 04:11:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:27.631 04:11:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:27.631 04:11:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 375890' 00:20:27.631 killing process with pid 375890 00:20:27.631 04:11:41 -- 
common/autotest_common.sh@955 -- # kill 375890 00:20:27.631 04:11:41 -- common/autotest_common.sh@960 -- # wait 375890 00:20:27.631 04:11:41 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:27.631 [2024-04-19 04:11:25.176047] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:20:27.631 [2024-04-19 04:11:25.176095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid375890 ] 00:20:27.631 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.631 [2024-04-19 04:11:25.225091] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.631 [2024-04-19 04:11:25.293440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.631 Running I/O for 15 seconds... 00:20:27.631 [2024-04-19 04:11:28.586079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:38400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.631 [2024-04-19 04:11:28.586113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.631 [2024-04-19 04:11:28.586128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.631 [2024-04-19 04:11:28.586135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.631 [2024-04-19 04:11:28.586143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:38416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.631 [2024-04-19 04:11:28.586149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 
cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.631 [2024-04-19 04:11:28.586157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.631 [2024-04-19 04:11:28.586162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.631 [2024-04-19 04:11:28.586170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:38432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.631 [2024-04-19 04:11:28.586175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.631 [2024-04-19 04:11:28.586183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.631 [2024-04-19 04:11:28.586188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.631 [2024-04-19 04:11:28.586195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:38448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.631 [2024-04-19 04:11:28.586201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.631 [2024-04-19 04:11:28.586208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.631 [2024-04-19 04:11:28.586214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.631 [2024-04-19 04:11:28.586221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:38464 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:20:27.631 [2024-04-19 04:11:28.586227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.631 [2024-04-19 04:11:28.586234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.631 [2024-04-19 04:11:28.586240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.631 [2024-04-19 04:11:28.586246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:38480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.631 [2024-04-19 04:11:28.586257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.631 [2024-04-19 04:11:28.586264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:38488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.631 [2024-04-19 04:11:28.586270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.631 [2024-04-19 04:11:28.586277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:38496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.631 [2024-04-19 04:11:28.586282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.631 [2024-04-19 04:11:28.586290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.631 [2024-04-19 04:11:28.586295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 
dnr:0 00:20:27.631 [2024-04-19 04:11:28.586302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:38512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.631 [2024-04-19 04:11:28.586308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:38520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:38528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:38536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:38552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 
[2024-04-19 04:11:28.586374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:38560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:38576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 
04:11:28.586449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:38600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:38608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:38616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:38624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586519] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:38648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:38664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:38680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586590] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:38696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:38704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:38720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:38728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586658] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:38736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:38744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:38760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:38768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:38776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:38784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:38800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:38808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586795] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586864] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.632 [2024-04-19 04:11:28.586934] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x186f00 00:20:27.632 [2024-04-19 04:11:28.586948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:37896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x186f00 00:20:27.632 [2024-04-19 04:11:28.586962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x186f00 00:20:27.632 [2024-04-19 04:11:28.586975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:37912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x186f00 00:20:27.632 [2024-04-19 04:11:28.586988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.586995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:37920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x186f00 00:20:27.632 [2024-04-19 04:11:28.587001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.587008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x186f00 00:20:27.632 [2024-04-19 04:11:28.587014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.587021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x186f00 00:20:27.632 [2024-04-19 04:11:28.587027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.587034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:37944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x186f00 00:20:27.632 [2024-04-19 04:11:28.587041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.587049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:37952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x186f00 00:20:27.632 [2024-04-19 04:11:28.587054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.587061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:37960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x186f00 00:20:27.632 [2024-04-19 04:11:28.587067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 
cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.587074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:37968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x186f00 00:20:27.632 [2024-04-19 04:11:28.587080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.587087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:37976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x186f00 00:20:27.632 [2024-04-19 04:11:28.587093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.587100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:37984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x186f00 00:20:27.632 [2024-04-19 04:11:28.587106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.587113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:37992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x186f00 00:20:27.632 [2024-04-19 04:11:28.587118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 [2024-04-19 04:11:28.587125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:38000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x186f00 00:20:27.632 [2024-04-19 04:11:28.587132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.632 
[2024-04-19 04:11:28.587139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x186f00 00:20:27.632 [2024-04-19 04:11:28.587145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:38024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:38056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:54 nsid:1 lba:38088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:38112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:38120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38128 len:8 SGL KEYED 
DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:38136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:38144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:38160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:38168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 
key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:38184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 
04:11:28.587477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:38216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:38232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:38248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587542] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:38264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:38280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:38288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:38304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:38312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:38328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 
cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:38368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 
[2024-04-19 04:11:28.587745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.587758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:38384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:28.587763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.589589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.633 [2024-04-19 04:11:28.589602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.633 [2024-04-19 04:11:28.589607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38392 len:8 PRP1 0x0 PRP2 0x0 00:20:27.633 [2024-04-19 04:11:28.589618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:28.589651] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a00 was disconnected and freed. reset controller. 00:20:27.633 [2024-04-19 04:11:28.589659] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:20:27.633 [2024-04-19 04:11:28.589666] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:27.633 [2024-04-19 04:11:28.592203] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:27.633 [2024-04-19 04:11:28.605593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:27.633 [2024-04-19 04:11:28.637307] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:27.633 [2024-04-19 04:11:32.001820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:32.001854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:32.001868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:32.001875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:32.001883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:32.001889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:32.001897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:32.001903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 
m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:32.001910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:32.001916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:32.001923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:32.001929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:32.001937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x186f00 00:20:27.633 [2024-04-19 04:11:32.001942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:32.001950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.633 [2024-04-19 04:11:32.001956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:32.001968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.633 [2024-04-19 04:11:32.001974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:32.001981] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.633 [2024-04-19 04:11:32.001986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:32.001993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.633 [2024-04-19 04:11:32.001999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.633 [2024-04-19 04:11:32.002006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.634 [2024-04-19 04:11:32.002012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.634 [2024-04-19 04:11:32.002025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.634 [2024-04-19 04:11:32.002038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.634 [2024-04-19 04:11:32.002050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 
sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.634 [2024-04-19 04:11:32.002171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002192] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:102 nsid:1 lba:24040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.634 [2024-04-19 04:11:32.002291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.634 [2024-04-19 04:11:32.002305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.634 [2024-04-19 04:11:32.002318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.634 [2024-04-19 04:11:32.002330] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.634 [2024-04-19 04:11:32.002343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.634 [2024-04-19 04:11:32.002355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.634 [2024-04-19 04:11:32.002368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 
[2024-04-19 04:11:32.002404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002469] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x186f00 00:20:27.634 [2024-04-19 04:11:32.002527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.634 [2024-04-19 04:11:32.002534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:119 nsid:1 lba:24152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x186f00
00:20:27.634 [2024-04-19 04:11:32.002540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0
[... remaining queued READ/WRITE command printouts (READs lba 24160-24240, WRITEs lba 24376-24880), each followed by an identical "ABORTED - SQ DELETION (00/08)" completion, elided ...]
00:20:27.635 [2024-04-19 04:11:32.005240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:27.635 [2024-04-19 04:11:32.005252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:27.635 [2024-04-19 04:11:32.005258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24888 len:8 PRP1 0x0 PRP2 0x0
00:20:27.635 [2024-04-19 04:11:32.005264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:27.635 [2024-04-19 04:11:32.005298] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4940 was disconnected and freed. reset controller.
00:20:27.635 [2024-04-19 04:11:32.005306] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422
00:20:27.635 [2024-04-19 04:11:32.005312] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:27.635 [2024-04-19 04:11:32.007844] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:27.635 [2024-04-19 04:11:32.021137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:20:27.635 [2024-04-19 04:11:32.060318] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:27.635 [2024-04-19 04:11:36.337620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x186f00
00:20:27.636 [2024-04-19 04:11:36.337654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0
[... further queued WRITE command printouts (lba 32720-32968), each followed by an identical "ABORTED - SQ DELETION (00/08)" completion, elided ...]
00:20:27.636 [2024-04-19 04:11:36.338091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x186f00 00:20:27.636 [2024-04-19 04:11:36.338097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.636 [2024-04-19 04:11:36.338105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:32328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x186f00 00:20:27.636 [2024-04-19 04:11:36.338111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.636 [2024-04-19 04:11:36.338118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:32336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x186f00 00:20:27.636 [2024-04-19 04:11:36.338124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.636 [2024-04-19 04:11:36.338131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:32344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x186f00 00:20:27.636 [2024-04-19 04:11:36.338138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.636 [2024-04-19 04:11:36.338145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:32352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x186f00 00:20:27.636 [2024-04-19 04:11:36.338151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.636 [2024-04-19 04:11:36.338158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:73 nsid:1 lba:32360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x186f00 00:20:27.636 [2024-04-19 04:11:36.338164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.636 [2024-04-19 04:11:36.338172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:32368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x186f00 00:20:27.636 [2024-04-19 04:11:36.338178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.636 [2024-04-19 04:11:36.338185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:32976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.636 [2024-04-19 04:11:36.338190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.636 [2024-04-19 04:11:36.338198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:32984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.636 [2024-04-19 04:11:36.338203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.636 [2024-04-19 04:11:36.338210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:32992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.636 [2024-04-19 04:11:36.338216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.636 [2024-04-19 04:11:36.338222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.636 [2024-04-19 04:11:36.338228] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.636 [2024-04-19 04:11:36.338235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:33008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.636 [2024-04-19 04:11:36.338242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.636 [2024-04-19 04:11:36.338249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:32376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x186f00 00:20:27.636 [2024-04-19 04:11:36.338254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.636 [2024-04-19 04:11:36.338261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x186f00 00:20:27.636 [2024-04-19 04:11:36.338267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.636 [2024-04-19 04:11:36.338274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:32392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x186f00 00:20:27.636 [2024-04-19 04:11:36.338280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.636 [2024-04-19 04:11:36.338287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:32400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x186f00 00:20:27.636 [2024-04-19 04:11:36.338293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.636 [2024-04-19 04:11:36.338300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x186f00 00:20:27.636 [2024-04-19 04:11:36.338306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.636 [2024-04-19 04:11:36.338313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x186f00 00:20:27.636 [2024-04-19 04:11:36.338319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.636 [2024-04-19 04:11:36.338326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:32424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x186f00 00:20:27.636 [2024-04-19 04:11:36.338332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.636 [2024-04-19 04:11:36.338339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x186f00 00:20:27.636 [2024-04-19 04:11:36.338345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.636 [2024-04-19 04:11:36.338352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:32440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x186f00 00:20:27.636 [2024-04-19 04:11:36.338358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 
00:20:27.636 [2024-04-19 04:11:36.338365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x186f00 00:20:27.636 [2024-04-19 04:11:36.338370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.636 [2024-04-19 04:11:36.338378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:32456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x186f00 00:20:27.636 [2024-04-19 04:11:36.338384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.636 [2024-04-19 04:11:36.338392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x186f00 00:20:27.636 [2024-04-19 04:11:36.338397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.636 [2024-04-19 04:11:36.338408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:32472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x186f00 00:20:27.636 [2024-04-19 04:11:36.338414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.636 [2024-04-19 04:11:36.338421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x186f00 00:20:27.636 [2024-04-19 04:11:36.338427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.636 [2024-04-19 04:11:36.338434] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:32488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x186f00 00:20:27.636 [2024-04-19 04:11:36.338440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.636 [2024-04-19 04:11:36.338448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:32496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x186f00 00:20:27.637 [2024-04-19 04:11:36.338453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:33016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:33024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:33032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:33040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:27.637 [2024-04-19 04:11:36.338505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:33048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:33056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:33064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:33072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:33080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 
[2024-04-19 04:11:36.338577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:33088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:33096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:33104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:33112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:33120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:33128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 
04:11:36.338645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:33136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:33152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:33168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338718] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:33184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:33200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:32504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x186f00 00:20:27.637 [2024-04-19 04:11:36.338775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x186f00 
00:20:27.637 [2024-04-19 04:11:36.338789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:32520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x186f00 00:20:27.637 [2024-04-19 04:11:36.338801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:33208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:33224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 
p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:33264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:33272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:33280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:27.637 [2024-04-19 04:11:36.338933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:33288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:33296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:33304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:33312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.338990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:33320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.338995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 
[2024-04-19 04:11:36.339002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:33328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.637 [2024-04-19 04:11:36.339008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.339016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x186f00 00:20:27.637 [2024-04-19 04:11:36.339022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.339029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x186f00 00:20:27.637 [2024-04-19 04:11:36.339035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.339042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:32544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x186f00 00:20:27.637 [2024-04-19 04:11:36.339048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.339055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x186f00 00:20:27.637 [2024-04-19 04:11:36.339060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.339067] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:95 nsid:1 lba:32560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x186f00 00:20:27.637 [2024-04-19 04:11:36.339073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.339081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x186f00 00:20:27.637 [2024-04-19 04:11:36.339086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.339093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:32576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x186f00 00:20:27.637 [2024-04-19 04:11:36.339099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.339106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:32584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x186f00 00:20:27.637 [2024-04-19 04:11:36.339113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.339120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:32592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x186f00 00:20:27.637 [2024-04-19 04:11:36.339126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.339133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32600 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x186f00 00:20:27.637 [2024-04-19 04:11:36.339139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.339146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:32608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x186f00 00:20:27.637 [2024-04-19 04:11:36.339151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.339160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:32616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x186f00 00:20:27.637 [2024-04-19 04:11:36.339166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.339175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x186f00 00:20:27.637 [2024-04-19 04:11:36.339181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.339188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x186f00 00:20:27.637 [2024-04-19 04:11:36.339193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.339200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32640 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x20000759e000 len:0x1000 key:0x186f00 00:20:27.637 [2024-04-19 04:11:36.339206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.339214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:32648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x186f00 00:20:27.637 [2024-04-19 04:11:36.339219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.339227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x186f00 00:20:27.637 [2024-04-19 04:11:36.339232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.339239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:32664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x186f00 00:20:27.637 [2024-04-19 04:11:36.339245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.339252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:32672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x186f00 00:20:27.637 [2024-04-19 04:11:36.339258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.339266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x186f00 
00:20:27.637 [2024-04-19 04:11:36.339271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.339278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:32688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x186f00 00:20:27.637 [2024-04-19 04:11:36.339284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.637 [2024-04-19 04:11:36.339291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:32696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x186f00 00:20:27.638 [2024-04-19 04:11:36.339297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.638 [2024-04-19 04:11:36.339305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:32704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x186f00 00:20:27.638 [2024-04-19 04:11:36.339313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:92e0 p:0 m:0 dnr:0 00:20:27.638 [2024-04-19 04:11:36.341118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.638 [2024-04-19 04:11:36.341129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.638 [2024-04-19 04:11:36.341135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32712 len:8 PRP1 0x0 PRP2 0x0 00:20:27.638 [2024-04-19 04:11:36.341142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.638 [2024-04-19 04:11:36.341175] 
bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4940 was disconnected and freed. reset controller. 00:20:27.638 [2024-04-19 04:11:36.341184] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:20:27.638 [2024-04-19 04:11:36.341190] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:27.638 [2024-04-19 04:11:36.343726] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:27.638 [2024-04-19 04:11:36.356773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:27.638 [2024-04-19 04:11:36.392374] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:27.638 00:20:27.638 Latency(us) 00:20:27.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.638 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:27.638 Verification LBA range: start 0x0 length 0x4000 00:20:27.638 NVMe0n1 : 15.01 15713.48 61.38 259.44 0.00 7994.00 326.16 1019060.53 00:20:27.638 =================================================================================================================== 00:20:27.638 Total : 15713.48 61.38 259.44 0.00 7994.00 326.16 1019060.53 00:20:27.638 Received shutdown signal, test time was about 15.000000 seconds 00:20:27.638 00:20:27.638 Latency(us) 00:20:27.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.638 =================================================================================================================== 00:20:27.638 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:27.638 04:11:41 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:20:27.638 04:11:41 -- host/failover.sh@65 -- # count=3 00:20:27.638 04:11:41 -- host/failover.sh@67 -- 
# (( count != 3 )) 00:20:27.638 04:11:41 -- host/failover.sh@73 -- # bdevperf_pid=378884 00:20:27.638 04:11:41 -- host/failover.sh@75 -- # waitforlisten 378884 /var/tmp/bdevperf.sock 00:20:27.638 04:11:41 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:20:27.638 04:11:41 -- common/autotest_common.sh@817 -- # '[' -z 378884 ']' 00:20:27.638 04:11:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:27.638 04:11:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:27.638 04:11:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:27.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:27.638 04:11:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:27.638 04:11:41 -- common/autotest_common.sh@10 -- # set +x 00:20:28.204 04:11:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:28.204 04:11:42 -- common/autotest_common.sh@850 -- # return 0 00:20:28.204 04:11:42 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:20:28.462 [2024-04-19 04:11:42.795313] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:20:28.462 04:11:42 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:20:28.462 [2024-04-19 04:11:42.959845] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:20:28.462 04:11:42 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:28.720 NVMe0n1 00:20:28.720 04:11:43 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:28.977 00:20:28.977 04:11:43 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:29.234 00:20:29.234 04:11:43 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:29.234 04:11:43 -- host/failover.sh@82 -- # grep -q NVMe0 00:20:29.492 04:11:43 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:29.492 04:11:44 -- host/failover.sh@87 -- # sleep 3 00:20:32.771 04:11:47 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:32.771 04:11:47 -- host/failover.sh@88 -- # grep -q NVMe0 00:20:32.771 04:11:47 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:32.771 04:11:47 -- host/failover.sh@90 -- # run_test_pid=379833 00:20:32.771 04:11:47 -- host/failover.sh@92 -- # wait 379833 00:20:34.143 0 00:20:34.143 04:11:48 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:34.143 [2024-04-19 04:11:41.883399] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:20:34.143 [2024-04-19 04:11:41.883452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid378884 ] 00:20:34.143 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.143 [2024-04-19 04:11:41.934855] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.143 [2024-04-19 04:11:41.996875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.143 [2024-04-19 04:11:43.976264] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:20:34.143 [2024-04-19 04:11:43.976857] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:34.143 [2024-04-19 04:11:43.976886] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:34.143 [2024-04-19 04:11:43.990757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:34.143 [2024-04-19 04:11:44.006900] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:34.143 Running I/O for 1 seconds... 
00:20:34.143 00:20:34.143 Latency(us) 00:20:34.143 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.143 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:34.143 Verification LBA range: start 0x0 length 0x4000 00:20:34.143 NVMe0n1 : 1.01 19607.92 76.59 0.00 0.00 6493.91 2512.21 15825.73 00:20:34.143 =================================================================================================================== 00:20:34.143 Total : 19607.92 76.59 0.00 0.00 6493.91 2512.21 15825.73 00:20:34.143 04:11:48 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:34.143 04:11:48 -- host/failover.sh@95 -- # grep -q NVMe0 00:20:34.143 04:11:48 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:34.143 04:11:48 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:34.143 04:11:48 -- host/failover.sh@99 -- # grep -q NVMe0 00:20:34.401 04:11:48 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:34.658 04:11:48 -- host/failover.sh@101 -- # sleep 3 00:20:37.941 04:11:51 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:37.941 04:11:51 -- host/failover.sh@103 -- # grep -q NVMe0 00:20:37.941 04:11:52 -- host/failover.sh@108 -- # killprocess 378884 00:20:37.941 04:11:52 -- common/autotest_common.sh@936 -- # '[' -z 378884 ']' 00:20:37.941 04:11:52 -- common/autotest_common.sh@940 -- # kill -0 378884 
00:20:37.942 04:11:52 -- common/autotest_common.sh@941 -- # uname 00:20:37.942 04:11:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:37.942 04:11:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 378884 00:20:37.942 04:11:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:37.942 04:11:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:37.942 04:11:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 378884' 00:20:37.942 killing process with pid 378884 00:20:37.942 04:11:52 -- common/autotest_common.sh@955 -- # kill 378884 00:20:37.942 04:11:52 -- common/autotest_common.sh@960 -- # wait 378884 00:20:37.942 04:11:52 -- host/failover.sh@110 -- # sync 00:20:37.942 04:11:52 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:38.200 04:11:52 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:20:38.200 04:11:52 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:38.200 04:11:52 -- host/failover.sh@116 -- # nvmftestfini 00:20:38.200 04:11:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:38.200 04:11:52 -- nvmf/common.sh@117 -- # sync 00:20:38.200 04:11:52 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:38.200 04:11:52 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:38.200 04:11:52 -- nvmf/common.sh@120 -- # set +e 00:20:38.200 04:11:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:38.200 04:11:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:38.200 rmmod nvme_rdma 00:20:38.200 rmmod nvme_fabrics 00:20:38.201 04:11:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:38.201 04:11:52 -- nvmf/common.sh@124 -- # set -e 00:20:38.201 04:11:52 -- nvmf/common.sh@125 -- # return 0 00:20:38.201 04:11:52 -- nvmf/common.sh@478 -- # '[' -n 375593 ']' 00:20:38.201 04:11:52 -- nvmf/common.sh@479 -- # 
killprocess 375593 00:20:38.201 04:11:52 -- common/autotest_common.sh@936 -- # '[' -z 375593 ']' 00:20:38.201 04:11:52 -- common/autotest_common.sh@940 -- # kill -0 375593 00:20:38.201 04:11:52 -- common/autotest_common.sh@941 -- # uname 00:20:38.201 04:11:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:38.201 04:11:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 375593 00:20:38.201 04:11:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:38.201 04:11:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:38.201 04:11:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 375593' 00:20:38.201 killing process with pid 375593 00:20:38.201 04:11:52 -- common/autotest_common.sh@955 -- # kill 375593 00:20:38.201 04:11:52 -- common/autotest_common.sh@960 -- # wait 375593 00:20:38.460 04:11:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:38.460 04:11:52 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:20:38.460 00:20:38.460 real 0m35.569s 00:20:38.460 user 2m1.556s 00:20:38.460 sys 0m5.960s 00:20:38.460 04:11:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:38.460 04:11:52 -- common/autotest_common.sh@10 -- # set +x 00:20:38.460 ************************************ 00:20:38.460 END TEST nvmf_failover 00:20:38.460 ************************************ 00:20:38.460 04:11:52 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:20:38.460 04:11:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:38.460 04:11:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:38.460 04:11:52 -- common/autotest_common.sh@10 -- # set +x 00:20:38.720 ************************************ 00:20:38.720 START TEST nvmf_discovery 00:20:38.720 ************************************ 00:20:38.720 04:11:53 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:20:38.720 * Looking for test storage... 00:20:38.720 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:38.720 04:11:53 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:38.720 04:11:53 -- nvmf/common.sh@7 -- # uname -s 00:20:38.720 04:11:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:38.720 04:11:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:38.720 04:11:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:38.720 04:11:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:38.720 04:11:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:38.720 04:11:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:38.720 04:11:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:38.720 04:11:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:38.720 04:11:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:38.720 04:11:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:38.720 04:11:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:20:38.720 04:11:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:20:38.720 04:11:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:38.720 04:11:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:38.720 04:11:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:38.720 04:11:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:38.720 04:11:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:38.720 04:11:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:38.720 04:11:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:38.720 
04:11:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:38.720 04:11:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.720 04:11:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.720 04:11:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.720 04:11:53 -- paths/export.sh@5 -- # export PATH 00:20:38.720 04:11:53 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.720 04:11:53 -- nvmf/common.sh@47 -- # : 0 00:20:38.720 04:11:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:38.720 04:11:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:38.720 04:11:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:38.720 04:11:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:38.720 04:11:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:38.720 04:11:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:38.720 04:11:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:38.720 04:11:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:38.720 04:11:53 -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:20:38.720 04:11:53 -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:20:38.720 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
00:20:38.720 04:11:53 -- host/discovery.sh@13 -- # exit 0 00:20:38.720 00:20:38.720 real 0m0.107s 00:20:38.720 user 0m0.039s 00:20:38.720 sys 0m0.075s 00:20:38.720 04:11:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:38.720 04:11:53 -- common/autotest_common.sh@10 -- # set +x 00:20:38.720 ************************************ 00:20:38.720 END TEST nvmf_discovery 00:20:38.720 ************************************ 00:20:38.720 04:11:53 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:20:38.720 04:11:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:38.720 04:11:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:38.720 04:11:53 -- common/autotest_common.sh@10 -- # set +x 00:20:38.979 ************************************ 00:20:38.979 START TEST nvmf_discovery_remove_ifc 00:20:38.979 ************************************ 00:20:38.979 04:11:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:20:38.979 * Looking for test storage... 
00:20:38.979 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:38.979 04:11:53 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:38.979 04:11:53 -- nvmf/common.sh@7 -- # uname -s 00:20:38.979 04:11:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:38.979 04:11:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:38.979 04:11:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:38.979 04:11:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:38.979 04:11:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:38.979 04:11:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:38.979 04:11:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:38.979 04:11:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:38.979 04:11:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:38.979 04:11:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:38.979 04:11:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:20:38.979 04:11:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:20:38.979 04:11:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:38.979 04:11:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:38.979 04:11:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:38.979 04:11:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:38.979 04:11:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:38.979 04:11:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:38.979 04:11:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:38.979 04:11:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:38.979 04:11:53 -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.979 04:11:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.979 04:11:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.979 04:11:53 -- paths/export.sh@5 -- # export PATH 00:20:38.979 04:11:53 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.979 04:11:53 -- nvmf/common.sh@47 -- # : 0 00:20:38.979 04:11:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:38.979 04:11:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:38.979 04:11:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:38.979 04:11:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:38.979 04:11:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:38.979 04:11:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:38.979 04:11:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:38.979 04:11:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:38.979 04:11:53 -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:20:38.979 04:11:53 -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:20:38.979 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
00:20:38.979 04:11:53 -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:20:38.979 00:20:38.979 real 0m0.109s 00:20:38.979 user 0m0.054s 00:20:38.979 sys 0m0.063s 00:20:38.979 04:11:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:38.979 04:11:53 -- common/autotest_common.sh@10 -- # set +x 00:20:38.979 ************************************ 00:20:38.979 END TEST nvmf_discovery_remove_ifc 00:20:38.979 ************************************ 00:20:38.979 04:11:53 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:20:38.979 04:11:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:38.979 04:11:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:38.979 04:11:53 -- common/autotest_common.sh@10 -- # set +x 00:20:39.239 ************************************ 00:20:39.239 START TEST nvmf_identify_kernel_target 00:20:39.239 ************************************ 00:20:39.239 04:11:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:20:39.239 * Looking for test storage... 
00:20:39.239 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:39.239 04:11:53 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:39.239 04:11:53 -- nvmf/common.sh@7 -- # uname -s 00:20:39.239 04:11:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.239 04:11:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.239 04:11:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.239 04:11:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.239 04:11:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.239 04:11:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.239 04:11:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.239 04:11:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.239 04:11:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.239 04:11:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.239 04:11:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:20:39.239 04:11:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:20:39.239 04:11:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.239 04:11:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.239 04:11:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:39.239 04:11:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:39.239 04:11:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:39.239 04:11:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.239 04:11:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.239 04:11:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.239 04:11:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.239 04:11:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.239 04:11:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.239 04:11:53 -- paths/export.sh@5 -- # export PATH 00:20:39.239 04:11:53 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.239 04:11:53 -- nvmf/common.sh@47 -- # : 0 00:20:39.239 04:11:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:39.239 04:11:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:39.239 04:11:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:39.239 04:11:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.239 04:11:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.239 04:11:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:39.239 04:11:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:39.239 04:11:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:39.239 04:11:53 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:20:39.239 04:11:53 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:20:39.239 04:11:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.239 04:11:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:39.239 04:11:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:39.239 04:11:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:39.239 04:11:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.239 04:11:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.239 04:11:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.240 04:11:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:39.240 04:11:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:39.240 
04:11:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:39.240 04:11:53 -- common/autotest_common.sh@10 -- # set +x 00:20:45.803 04:11:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:45.803 04:11:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:45.803 04:11:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:45.803 04:11:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:45.803 04:11:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:45.803 04:11:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:45.803 04:11:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:45.803 04:11:59 -- nvmf/common.sh@295 -- # net_devs=() 00:20:45.803 04:11:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:45.803 04:11:59 -- nvmf/common.sh@296 -- # e810=() 00:20:45.803 04:11:59 -- nvmf/common.sh@296 -- # local -ga e810 00:20:45.803 04:11:59 -- nvmf/common.sh@297 -- # x722=() 00:20:45.803 04:11:59 -- nvmf/common.sh@297 -- # local -ga x722 00:20:45.803 04:11:59 -- nvmf/common.sh@298 -- # mlx=() 00:20:45.803 04:11:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:45.803 04:11:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:45.803 04:11:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:45.803 04:11:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:45.803 04:11:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:45.803 04:11:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:45.803 04:11:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:45.803 04:11:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:45.803 04:11:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:45.803 04:11:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:45.803 04:11:59 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:45.803 04:11:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:45.803 04:11:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:45.803 04:11:59 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:45.803 04:11:59 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:45.803 04:11:59 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:45.803 04:11:59 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:45.803 04:11:59 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:45.803 04:11:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:45.803 04:11:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:45.803 04:11:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:20:45.803 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:20:45.803 04:11:59 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:45.803 04:11:59 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:45.803 04:11:59 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:45.803 04:11:59 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:45.803 04:11:59 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:45.803 04:11:59 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:45.803 04:11:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:45.803 04:11:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:20:45.803 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:20:45.803 04:11:59 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:45.803 04:11:59 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:45.803 04:11:59 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:45.803 04:11:59 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:45.803 04:11:59 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:45.803 04:11:59 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:45.803 
04:11:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:45.803 04:11:59 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:45.803 04:11:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:45.803 04:11:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.803 04:11:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:45.803 04:11:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.803 04:11:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:20:45.803 Found net devices under 0000:18:00.0: mlx_0_0 00:20:45.803 04:11:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.803 04:11:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:45.803 04:11:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.803 04:11:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:45.803 04:11:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.803 04:11:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:20:45.803 Found net devices under 0000:18:00.1: mlx_0_1 00:20:45.803 04:11:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.803 04:11:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:45.803 04:11:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:45.803 04:11:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:45.803 04:11:59 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:20:45.803 04:11:59 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:20:45.803 04:11:59 -- nvmf/common.sh@409 -- # rdma_device_init 00:20:45.803 04:11:59 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:20:45.803 04:11:59 -- nvmf/common.sh@58 -- # uname 00:20:45.803 04:11:59 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:45.803 04:11:59 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:45.803 04:11:59 -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:45.803 04:11:59 -- 
nvmf/common.sh@64 -- # modprobe ib_umad 00:20:45.803 04:11:59 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:45.803 04:11:59 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:45.803 04:11:59 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:45.803 04:11:59 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:45.803 04:11:59 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:20:45.803 04:11:59 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:45.803 04:11:59 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:45.803 04:11:59 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:45.803 04:11:59 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:45.803 04:11:59 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:45.803 04:11:59 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:45.803 04:11:59 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:45.803 04:11:59 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:45.803 04:11:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:45.804 04:11:59 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:45.804 04:11:59 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:45.804 04:11:59 -- nvmf/common.sh@105 -- # continue 2 00:20:45.804 04:11:59 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:45.804 04:11:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:45.804 04:11:59 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:45.804 04:11:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:45.804 04:11:59 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:45.804 04:11:59 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:45.804 04:11:59 -- nvmf/common.sh@105 -- # continue 2 00:20:45.804 04:11:59 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:45.804 04:11:59 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 
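The device-detection loop earlier in this section (nvmf/common.sh@296-328) buckets PCI NICs into e810/x722/mlx families by vendor and device ID before deciding which driver path to use. A condensed sketch of that classification, with the IDs copied from the log (0x1015 is the ConnectX-4 Lx part reported for both ports here):

```shell
#!/bin/sh
# Vendor IDs as used in the log: intel=0x8086, mellanox=0x15b3.
classify_nic() {
  vendor=$1 device=$2
  case "$vendor:$device" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 parts
    0x8086:0x37d2)               echo x722 ;;    # Intel X722
    0x15b3:*)                    echo mlx ;;     # Mellanox; log shows 0x1015 ports
    *)                           echo unknown ;;
  esac
}

# The two ports found in the log, 0000:18:00.0/1 (0x15b3 - 0x1015):
family=$(classify_nic 0x15b3 0x1015)
echo "$family"
```

This mirrors the log's outcome: with `NET_TYPE=phy` and RDMA transport, only the mlx bucket is kept (`pci_devs=("${mlx[@]}")`) and both 0x1015 ports land in it.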
00:20:45.804 04:11:59 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:45.804 04:11:59 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:45.804 04:11:59 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:45.804 04:11:59 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:45.804 04:11:59 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:45.804 04:11:59 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:45.804 04:11:59 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:45.804 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:45.804 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:20:45.804 altname enp24s0f0np0 00:20:45.804 altname ens785f0np0 00:20:45.804 inet 192.168.100.8/24 scope global mlx_0_0 00:20:45.804 valid_lft forever preferred_lft forever 00:20:45.804 04:11:59 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:45.804 04:11:59 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:45.804 04:11:59 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:45.804 04:11:59 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:45.804 04:11:59 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:45.804 04:11:59 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:45.804 04:11:59 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:45.804 04:11:59 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:45.804 04:11:59 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:45.804 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:45.804 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:20:45.804 altname enp24s0f1np1 00:20:45.804 altname ens785f1np1 00:20:45.804 inet 192.168.100.9/24 scope global mlx_0_1 00:20:45.804 valid_lft forever preferred_lft forever 00:20:45.804 04:11:59 -- nvmf/common.sh@411 -- # return 0 00:20:45.804 04:11:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:45.804 04:11:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:45.804 04:11:59 -- nvmf/common.sh@444 -- # [[ 
rdma == \r\d\m\a ]] 00:20:45.804 04:11:59 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:20:45.804 04:11:59 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:45.804 04:11:59 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:45.804 04:11:59 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:45.804 04:11:59 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:45.804 04:11:59 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:45.804 04:11:59 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:45.804 04:11:59 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:45.804 04:11:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:45.804 04:11:59 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:45.804 04:11:59 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:45.804 04:11:59 -- nvmf/common.sh@105 -- # continue 2 00:20:45.804 04:11:59 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:45.804 04:11:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:45.804 04:11:59 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:45.804 04:11:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:45.804 04:11:59 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:45.804 04:11:59 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:45.804 04:11:59 -- nvmf/common.sh@105 -- # continue 2 00:20:45.804 04:11:59 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:45.804 04:11:59 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:45.804 04:11:59 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:45.804 04:11:59 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:45.804 04:11:59 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:45.804 04:11:59 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:45.804 04:11:59 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 
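The `get_ip_address` calls above recover each interface's IPv4 address by taking field 4 of `ip -o -4 addr show <if>` and stripping the prefix length with `cut -d/ -f1`; common.sh then splits the collected newline-separated list into first and second target IPs with `head`/`tail`. A self-contained sketch of both steps on canned `ip` output (the sample line mirrors the log's mlx_0_0 entry on the 192.168.100.0/24 test subnet):

```shell
#!/bin/sh
# One-line (-o) output of `ip -o -4 addr show mlx_0_0`, as seen in the log.
sample='8: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0\       valid_lft forever preferred_lft forever'

# Field 4 of the -o format is the CIDR address; cut drops the /24 prefix length.
ip_addr=$(printf '%s\n' "$sample" | awk '{print $4}' | cut -d/ -f1)

# The per-interface results are joined newline-separated (RDMA_IP_LIST in the
# log), then split: line 1 -> first target IP, line 2 -> second target IP.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(printf '%s\n' "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(printf '%s\n' "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$ip_addr $NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"
```

This is exactly the `head -n 1` / `tail -n +2 | head -n 1` pair visible at nvmf/common.sh@446-447 in the log.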
00:20:45.804 04:11:59 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:45.804 04:11:59 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:45.804 04:11:59 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:45.804 04:11:59 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:45.804 04:11:59 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:45.804 04:11:59 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:20:45.804 192.168.100.9' 00:20:45.804 04:11:59 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:45.804 192.168.100.9' 00:20:45.804 04:11:59 -- nvmf/common.sh@446 -- # head -n 1 00:20:45.804 04:11:59 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:45.804 04:11:59 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:20:45.804 192.168.100.9' 00:20:45.804 04:11:59 -- nvmf/common.sh@447 -- # tail -n +2 00:20:45.804 04:11:59 -- nvmf/common.sh@447 -- # head -n 1 00:20:45.804 04:11:59 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:45.804 04:11:59 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:20:45.804 04:11:59 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:45.804 04:11:59 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:20:45.804 04:11:59 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:20:45.804 04:11:59 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:20:45.804 04:11:59 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:20:45.804 04:11:59 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:20:45.804 04:11:59 -- nvmf/common.sh@717 -- # local ip 00:20:45.804 04:11:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:45.804 04:11:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:45.804 04:11:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.804 04:11:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.804 04:11:59 -- nvmf/common.sh@723 -- # [[ -z rdma 
]] 00:20:45.804 04:11:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:45.804 04:11:59 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:20:45.804 04:11:59 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:20:45.804 04:11:59 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:20:45.804 04:11:59 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:20:45.804 04:11:59 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:20:45.804 04:11:59 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:20:45.804 04:11:59 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:20:45.804 04:11:59 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:45.804 04:11:59 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:45.804 04:11:59 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:45.804 04:11:59 -- nvmf/common.sh@628 -- # local block nvme 00:20:45.804 04:11:59 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:45.804 04:11:59 -- nvmf/common.sh@631 -- # modprobe nvmet 00:20:45.804 04:11:59 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:45.804 04:11:59 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:20:47.707 Waiting for block devices as requested 00:20:47.707 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:20:47.707 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:20:47.707 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:20:47.707 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:20:47.974 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:20:47.974 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:20:47.974 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:20:47.974 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:20:48.238 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:20:48.238 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:20:48.238 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:20:48.238 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:20:48.496 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:20:48.496 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:20:48.496 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:20:48.758 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:20:48.758 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:20:50.134 04:12:04 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:50.134 04:12:04 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:50.134 04:12:04 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:20:50.134 04:12:04 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:50.135 04:12:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:50.135 04:12:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:50.135 04:12:04 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:20:50.135 04:12:04 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:50.135 04:12:04 
-- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:20:50.135 No valid GPT data, bailing 00:20:50.135 04:12:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:50.135 04:12:04 -- scripts/common.sh@391 -- # pt= 00:20:50.135 04:12:04 -- scripts/common.sh@392 -- # return 1 00:20:50.135 04:12:04 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:20:50.135 04:12:04 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:20:50.135 04:12:04 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:50.135 04:12:04 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:50.135 04:12:04 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:50.135 04:12:04 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:50.135 04:12:04 -- nvmf/common.sh@656 -- # echo 1 00:20:50.135 04:12:04 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:20:50.135 04:12:04 -- nvmf/common.sh@658 -- # echo 1 00:20:50.135 04:12:04 -- nvmf/common.sh@660 -- # echo 192.168.100.8 00:20:50.135 04:12:04 -- nvmf/common.sh@661 -- # echo rdma 00:20:50.135 04:12:04 -- nvmf/common.sh@662 -- # echo 4420 00:20:50.135 04:12:04 -- nvmf/common.sh@663 -- # echo ipv4 00:20:50.135 04:12:04 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:50.135 04:12:04 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:20:50.394 00:20:50.394 Discovery Log Number of Records 2, Generation counter 2 00:20:50.394 =====Discovery Log Entry 0====== 00:20:50.394 trtype: rdma 00:20:50.394 adrfam: ipv4 00:20:50.394 subtype: current discovery subsystem 00:20:50.394 treq: not specified, sq flow 
control disable supported 00:20:50.394 portid: 1 00:20:50.394 trsvcid: 4420 00:20:50.394 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:50.394 traddr: 192.168.100.8 00:20:50.394 eflags: none 00:20:50.394 rdma_prtype: not specified 00:20:50.394 rdma_qptype: connected 00:20:50.394 rdma_cms: rdma-cm 00:20:50.394 rdma_pkey: 0x0000 00:20:50.394 =====Discovery Log Entry 1====== 00:20:50.394 trtype: rdma 00:20:50.394 adrfam: ipv4 00:20:50.394 subtype: nvme subsystem 00:20:50.394 treq: not specified, sq flow control disable supported 00:20:50.394 portid: 1 00:20:50.394 trsvcid: 4420 00:20:50.394 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:50.394 traddr: 192.168.100.8 00:20:50.394 eflags: none 00:20:50.394 rdma_prtype: not specified 00:20:50.394 rdma_qptype: connected 00:20:50.394 rdma_cms: rdma-cm 00:20:50.394 rdma_pkey: 0x0000 00:20:50.394 04:12:04 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:20:50.394 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:20:50.394 EAL: No free 2048 kB hugepages reported on node 1 00:20:50.394 ===================================================== 00:20:50.394 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:50.394 ===================================================== 00:20:50.394 Controller Capabilities/Features 00:20:50.394 ================================ 00:20:50.394 Vendor ID: 0000 00:20:50.394 Subsystem Vendor ID: 0000 00:20:50.394 Serial Number: 22d336add11c74cf6fa7 00:20:50.394 Model Number: Linux 00:20:50.394 Firmware Version: 6.7.0-68 00:20:50.394 Recommended Arb Burst: 0 00:20:50.394 IEEE OUI Identifier: 00 00 00 00:20:50.394 Multi-path I/O 00:20:50.394 May have multiple subsystem ports: No 00:20:50.394 May have multiple controllers: No 00:20:50.394 Associated with SR-IOV VF: No 00:20:50.394 Max Data Transfer Size: Unlimited 00:20:50.394 Max Number of 
Namespaces: 0 00:20:50.394 Max Number of I/O Queues: 1024 00:20:50.394 NVMe Specification Version (VS): 1.3 00:20:50.394 NVMe Specification Version (Identify): 1.3 00:20:50.394 Maximum Queue Entries: 128 00:20:50.394 Contiguous Queues Required: No 00:20:50.394 Arbitration Mechanisms Supported 00:20:50.394 Weighted Round Robin: Not Supported 00:20:50.394 Vendor Specific: Not Supported 00:20:50.394 Reset Timeout: 7500 ms 00:20:50.394 Doorbell Stride: 4 bytes 00:20:50.394 NVM Subsystem Reset: Not Supported 00:20:50.394 Command Sets Supported 00:20:50.394 NVM Command Set: Supported 00:20:50.394 Boot Partition: Not Supported 00:20:50.394 Memory Page Size Minimum: 4096 bytes 00:20:50.394 Memory Page Size Maximum: 4096 bytes 00:20:50.394 Persistent Memory Region: Not Supported 00:20:50.394 Optional Asynchronous Events Supported 00:20:50.394 Namespace Attribute Notices: Not Supported 00:20:50.394 Firmware Activation Notices: Not Supported 00:20:50.394 ANA Change Notices: Not Supported 00:20:50.394 PLE Aggregate Log Change Notices: Not Supported 00:20:50.394 LBA Status Info Alert Notices: Not Supported 00:20:50.394 EGE Aggregate Log Change Notices: Not Supported 00:20:50.394 Normal NVM Subsystem Shutdown event: Not Supported 00:20:50.394 Zone Descriptor Change Notices: Not Supported 00:20:50.394 Discovery Log Change Notices: Supported 00:20:50.394 Controller Attributes 00:20:50.394 128-bit Host Identifier: Not Supported 00:20:50.394 Non-Operational Permissive Mode: Not Supported 00:20:50.394 NVM Sets: Not Supported 00:20:50.394 Read Recovery Levels: Not Supported 00:20:50.394 Endurance Groups: Not Supported 00:20:50.394 Predictable Latency Mode: Not Supported 00:20:50.394 Traffic Based Keep ALive: Not Supported 00:20:50.394 Namespace Granularity: Not Supported 00:20:50.394 SQ Associations: Not Supported 00:20:50.394 UUID List: Not Supported 00:20:50.394 Multi-Domain Subsystem: Not Supported 00:20:50.394 Fixed Capacity Management: Not Supported 00:20:50.394 Variable Capacity 
Management: Not Supported 00:20:50.394 Delete Endurance Group: Not Supported 00:20:50.394 Delete NVM Set: Not Supported 00:20:50.394 Extended LBA Formats Supported: Not Supported 00:20:50.394 Flexible Data Placement Supported: Not Supported 00:20:50.394 00:20:50.394 Controller Memory Buffer Support 00:20:50.394 ================================ 00:20:50.394 Supported: No 00:20:50.394 00:20:50.394 Persistent Memory Region Support 00:20:50.394 ================================ 00:20:50.394 Supported: No 00:20:50.394 00:20:50.394 Admin Command Set Attributes 00:20:50.394 ============================ 00:20:50.394 Security Send/Receive: Not Supported 00:20:50.395 Format NVM: Not Supported 00:20:50.395 Firmware Activate/Download: Not Supported 00:20:50.395 Namespace Management: Not Supported 00:20:50.395 Device Self-Test: Not Supported 00:20:50.395 Directives: Not Supported 00:20:50.395 NVMe-MI: Not Supported 00:20:50.395 Virtualization Management: Not Supported 00:20:50.395 Doorbell Buffer Config: Not Supported 00:20:50.395 Get LBA Status Capability: Not Supported 00:20:50.395 Command & Feature Lockdown Capability: Not Supported 00:20:50.395 Abort Command Limit: 1 00:20:50.395 Async Event Request Limit: 1 00:20:50.395 Number of Firmware Slots: N/A 00:20:50.395 Firmware Slot 1 Read-Only: N/A 00:20:50.395 Firmware Activation Without Reset: N/A 00:20:50.395 Multiple Update Detection Support: N/A 00:20:50.395 Firmware Update Granularity: No Information Provided 00:20:50.395 Per-Namespace SMART Log: No 00:20:50.395 Asymmetric Namespace Access Log Page: Not Supported 00:20:50.395 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:50.395 Command Effects Log Page: Not Supported 00:20:50.395 Get Log Page Extended Data: Supported 00:20:50.395 Telemetry Log Pages: Not Supported 00:20:50.395 Persistent Event Log Pages: Not Supported 00:20:50.395 Supported Log Pages Log Page: May Support 00:20:50.395 Commands Supported & Effects Log Page: Not Supported 00:20:50.395 Feature 
Identifiers & Effects Log Page:May Support 00:20:50.395 NVMe-MI Commands & Effects Log Page: May Support 00:20:50.395 Data Area 4 for Telemetry Log: Not Supported 00:20:50.395 Error Log Page Entries Supported: 1 00:20:50.395 Keep Alive: Not Supported 00:20:50.395 00:20:50.395 NVM Command Set Attributes 00:20:50.395 ========================== 00:20:50.395 Submission Queue Entry Size 00:20:50.395 Max: 1 00:20:50.395 Min: 1 00:20:50.395 Completion Queue Entry Size 00:20:50.395 Max: 1 00:20:50.395 Min: 1 00:20:50.395 Number of Namespaces: 0 00:20:50.395 Compare Command: Not Supported 00:20:50.395 Write Uncorrectable Command: Not Supported 00:20:50.395 Dataset Management Command: Not Supported 00:20:50.395 Write Zeroes Command: Not Supported 00:20:50.395 Set Features Save Field: Not Supported 00:20:50.395 Reservations: Not Supported 00:20:50.395 Timestamp: Not Supported 00:20:50.395 Copy: Not Supported 00:20:50.395 Volatile Write Cache: Not Present 00:20:50.395 Atomic Write Unit (Normal): 1 00:20:50.395 Atomic Write Unit (PFail): 1 00:20:50.395 Atomic Compare & Write Unit: 1 00:20:50.395 Fused Compare & Write: Not Supported 00:20:50.395 Scatter-Gather List 00:20:50.395 SGL Command Set: Supported 00:20:50.395 SGL Keyed: Supported 00:20:50.395 SGL Bit Bucket Descriptor: Not Supported 00:20:50.395 SGL Metadata Pointer: Not Supported 00:20:50.395 Oversized SGL: Not Supported 00:20:50.395 SGL Metadata Address: Not Supported 00:20:50.395 SGL Offset: Supported 00:20:50.395 Transport SGL Data Block: Not Supported 00:20:50.395 Replay Protected Memory Block: Not Supported 00:20:50.395 00:20:50.395 Firmware Slot Information 00:20:50.395 ========================= 00:20:50.395 Active slot: 0 00:20:50.395 00:20:50.395 00:20:50.395 Error Log 00:20:50.395 ========= 00:20:50.395 00:20:50.395 Active Namespaces 00:20:50.395 ================= 00:20:50.395 Discovery Log Page 00:20:50.395 ================== 00:20:50.395 Generation Counter: 2 00:20:50.395 Number of Records: 2 00:20:50.395 
Record Format: 0 00:20:50.395 00:20:50.395 Discovery Log Entry 0 00:20:50.395 ---------------------- 00:20:50.395 Transport Type: 1 (RDMA) 00:20:50.395 Address Family: 1 (IPv4) 00:20:50.395 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:50.395 Entry Flags: 00:20:50.395 Duplicate Returned Information: 0 00:20:50.395 Explicit Persistent Connection Support for Discovery: 0 00:20:50.395 Transport Requirements: 00:20:50.395 Secure Channel: Not Specified 00:20:50.395 Port ID: 1 (0x0001) 00:20:50.395 Controller ID: 65535 (0xffff) 00:20:50.395 Admin Max SQ Size: 32 00:20:50.395 Transport Service Identifier: 4420 00:20:50.395 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:50.395 Transport Address: 192.168.100.8 00:20:50.395 Transport Specific Address Subtype - RDMA 00:20:50.395 RDMA QP Service Type: 1 (Reliable Connected) 00:20:50.395 RDMA Provider Type: 1 (No provider specified) 00:20:50.395 RDMA CM Service: 1 (RDMA_CM) 00:20:50.395 Discovery Log Entry 1 00:20:50.395 ---------------------- 00:20:50.395 Transport Type: 1 (RDMA) 00:20:50.395 Address Family: 1 (IPv4) 00:20:50.395 Subsystem Type: 2 (NVM Subsystem) 00:20:50.395 Entry Flags: 00:20:50.395 Duplicate Returned Information: 0 00:20:50.395 Explicit Persistent Connection Support for Discovery: 0 00:20:50.395 Transport Requirements: 00:20:50.395 Secure Channel: Not Specified 00:20:50.395 Port ID: 1 (0x0001) 00:20:50.395 Controller ID: 65535 (0xffff) 00:20:50.395 Admin Max SQ Size: 32 00:20:50.395 Transport Service Identifier: 4420 00:20:50.395 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:20:50.395 Transport Address: 192.168.100.8 00:20:50.395 Transport Specific Address Subtype - RDMA 00:20:50.395 RDMA QP Service Type: 1 (Reliable Connected) 00:20:50.395 RDMA Provider Type: 1 (No provider specified) 00:20:50.395 RDMA CM Service: 1 (RDMA_CM) 00:20:50.395 04:12:04 -- host/identify_kernel_nvmf.sh@24 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:50.395 EAL: No free 2048 kB hugepages reported on node 1 00:20:50.654 get_feature(0x01) failed 00:20:50.654 get_feature(0x02) failed 00:20:50.654 get_feature(0x04) failed 00:20:50.654 ===================================================== 00:20:50.654 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:20:50.654 ===================================================== 00:20:50.654 Controller Capabilities/Features 00:20:50.654 ================================ 00:20:50.654 Vendor ID: 0000 00:20:50.654 Subsystem Vendor ID: 0000 00:20:50.654 Serial Number: fb90a16e6ff0a467d1e8 00:20:50.654 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:20:50.654 Firmware Version: 6.7.0-68 00:20:50.654 Recommended Arb Burst: 6 00:20:50.654 IEEE OUI Identifier: 00 00 00 00:20:50.654 Multi-path I/O 00:20:50.654 May have multiple subsystem ports: Yes 00:20:50.654 May have multiple controllers: Yes 00:20:50.654 Associated with SR-IOV VF: No 00:20:50.654 Max Data Transfer Size: 1048576 00:20:50.654 Max Number of Namespaces: 1024 00:20:50.654 Max Number of I/O Queues: 128 00:20:50.654 NVMe Specification Version (VS): 1.3 00:20:50.654 NVMe Specification Version (Identify): 1.3 00:20:50.654 Maximum Queue Entries: 128 00:20:50.654 Contiguous Queues Required: No 00:20:50.654 Arbitration Mechanisms Supported 00:20:50.654 Weighted Round Robin: Not Supported 00:20:50.654 Vendor Specific: Not Supported 00:20:50.654 Reset Timeout: 7500 ms 00:20:50.654 Doorbell Stride: 4 bytes 00:20:50.654 NVM Subsystem Reset: Not Supported 00:20:50.654 Command Sets Supported 00:20:50.654 NVM Command Set: Supported 00:20:50.654 Boot Partition: Not Supported 00:20:50.654 Memory Page Size Minimum: 4096 bytes 00:20:50.654 Memory Page Size Maximum: 4096 bytes 00:20:50.654 Persistent Memory Region: Not Supported 
00:20:50.654 Optional Asynchronous Events Supported 00:20:50.654 Namespace Attribute Notices: Supported 00:20:50.654 Firmware Activation Notices: Not Supported 00:20:50.654 ANA Change Notices: Supported 00:20:50.654 PLE Aggregate Log Change Notices: Not Supported 00:20:50.654 LBA Status Info Alert Notices: Not Supported 00:20:50.654 EGE Aggregate Log Change Notices: Not Supported 00:20:50.654 Normal NVM Subsystem Shutdown event: Not Supported 00:20:50.654 Zone Descriptor Change Notices: Not Supported 00:20:50.654 Discovery Log Change Notices: Not Supported 00:20:50.654 Controller Attributes 00:20:50.654 128-bit Host Identifier: Supported 00:20:50.654 Non-Operational Permissive Mode: Not Supported 00:20:50.654 NVM Sets: Not Supported 00:20:50.654 Read Recovery Levels: Not Supported 00:20:50.654 Endurance Groups: Not Supported 00:20:50.654 Predictable Latency Mode: Not Supported 00:20:50.654 Traffic Based Keep ALive: Supported 00:20:50.654 Namespace Granularity: Not Supported 00:20:50.654 SQ Associations: Not Supported 00:20:50.654 UUID List: Not Supported 00:20:50.654 Multi-Domain Subsystem: Not Supported 00:20:50.654 Fixed Capacity Management: Not Supported 00:20:50.654 Variable Capacity Management: Not Supported 00:20:50.654 Delete Endurance Group: Not Supported 00:20:50.654 Delete NVM Set: Not Supported 00:20:50.654 Extended LBA Formats Supported: Not Supported 00:20:50.654 Flexible Data Placement Supported: Not Supported 00:20:50.654 00:20:50.654 Controller Memory Buffer Support 00:20:50.654 ================================ 00:20:50.654 Supported: No 00:20:50.654 00:20:50.654 Persistent Memory Region Support 00:20:50.654 ================================ 00:20:50.654 Supported: No 00:20:50.654 00:20:50.654 Admin Command Set Attributes 00:20:50.654 ============================ 00:20:50.654 Security Send/Receive: Not Supported 00:20:50.654 Format NVM: Not Supported 00:20:50.654 Firmware Activate/Download: Not Supported 00:20:50.654 Namespace Management: Not 
Supported 00:20:50.654 Device Self-Test: Not Supported 00:20:50.654 Directives: Not Supported 00:20:50.654 NVMe-MI: Not Supported 00:20:50.654 Virtualization Management: Not Supported 00:20:50.654 Doorbell Buffer Config: Not Supported 00:20:50.654 Get LBA Status Capability: Not Supported 00:20:50.654 Command & Feature Lockdown Capability: Not Supported 00:20:50.654 Abort Command Limit: 4 00:20:50.654 Async Event Request Limit: 4 00:20:50.654 Number of Firmware Slots: N/A 00:20:50.654 Firmware Slot 1 Read-Only: N/A 00:20:50.654 Firmware Activation Without Reset: N/A 00:20:50.654 Multiple Update Detection Support: N/A 00:20:50.654 Firmware Update Granularity: No Information Provided 00:20:50.654 Per-Namespace SMART Log: Yes 00:20:50.654 Asymmetric Namespace Access Log Page: Supported 00:20:50.654 ANA Transition Time : 10 sec 00:20:50.654 00:20:50.654 Asymmetric Namespace Access Capabilities 00:20:50.654 ANA Optimized State : Supported 00:20:50.654 ANA Non-Optimized State : Supported 00:20:50.654 ANA Inaccessible State : Supported 00:20:50.654 ANA Persistent Loss State : Supported 00:20:50.654 ANA Change State : Supported 00:20:50.654 ANAGRPID is not changed : No 00:20:50.654 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:20:50.654 00:20:50.654 ANA Group Identifier Maximum : 128 00:20:50.654 Number of ANA Group Identifiers : 128 00:20:50.654 Max Number of Allowed Namespaces : 1024 00:20:50.654 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:20:50.654 Command Effects Log Page: Supported 00:20:50.654 Get Log Page Extended Data: Supported 00:20:50.654 Telemetry Log Pages: Not Supported 00:20:50.654 Persistent Event Log Pages: Not Supported 00:20:50.654 Supported Log Pages Log Page: May Support 00:20:50.654 Commands Supported & Effects Log Page: Not Supported 00:20:50.654 Feature Identifiers & Effects Log Page:May Support 00:20:50.654 NVMe-MI Commands & Effects Log Page: May Support 00:20:50.655 Data Area 4 for Telemetry Log: Not Supported 00:20:50.655 Error Log Page 
Entries Supported: 128 00:20:50.655 Keep Alive: Supported 00:20:50.655 Keep Alive Granularity: 1000 ms 00:20:50.655 00:20:50.655 NVM Command Set Attributes 00:20:50.655 ========================== 00:20:50.655 Submission Queue Entry Size 00:20:50.655 Max: 64 00:20:50.655 Min: 64 00:20:50.655 Completion Queue Entry Size 00:20:50.655 Max: 16 00:20:50.655 Min: 16 00:20:50.655 Number of Namespaces: 1024 00:20:50.655 Compare Command: Not Supported 00:20:50.655 Write Uncorrectable Command: Not Supported 00:20:50.655 Dataset Management Command: Supported 00:20:50.655 Write Zeroes Command: Supported 00:20:50.655 Set Features Save Field: Not Supported 00:20:50.655 Reservations: Not Supported 00:20:50.655 Timestamp: Not Supported 00:20:50.655 Copy: Not Supported 00:20:50.655 Volatile Write Cache: Present 00:20:50.655 Atomic Write Unit (Normal): 1 00:20:50.655 Atomic Write Unit (PFail): 1 00:20:50.655 Atomic Compare & Write Unit: 1 00:20:50.655 Fused Compare & Write: Not Supported 00:20:50.655 Scatter-Gather List 00:20:50.655 SGL Command Set: Supported 00:20:50.655 SGL Keyed: Supported 00:20:50.655 SGL Bit Bucket Descriptor: Not Supported 00:20:50.655 SGL Metadata Pointer: Not Supported 00:20:50.655 Oversized SGL: Not Supported 00:20:50.655 SGL Metadata Address: Not Supported 00:20:50.655 SGL Offset: Supported 00:20:50.655 Transport SGL Data Block: Not Supported 00:20:50.655 Replay Protected Memory Block: Not Supported 00:20:50.655 00:20:50.655 Firmware Slot Information 00:20:50.655 ========================= 00:20:50.655 Active slot: 0 00:20:50.655 00:20:50.655 Asymmetric Namespace Access 00:20:50.655 =========================== 00:20:50.655 Change Count : 0 00:20:50.655 Number of ANA Group Descriptors : 1 00:20:50.655 ANA Group Descriptor : 0 00:20:50.655 ANA Group ID : 1 00:20:50.655 Number of NSID Values : 1 00:20:50.655 Change Count : 0 00:20:50.655 ANA State : 1 00:20:50.655 Namespace Identifier : 1 00:20:50.655 00:20:50.655 Commands Supported and Effects 00:20:50.655 
============================== 00:20:50.655 Admin Commands 00:20:50.655 -------------- 00:20:50.655 Get Log Page (02h): Supported 00:20:50.655 Identify (06h): Supported 00:20:50.655 Abort (08h): Supported 00:20:50.655 Set Features (09h): Supported 00:20:50.655 Get Features (0Ah): Supported 00:20:50.655 Asynchronous Event Request (0Ch): Supported 00:20:50.655 Keep Alive (18h): Supported 00:20:50.655 I/O Commands 00:20:50.655 ------------ 00:20:50.655 Flush (00h): Supported 00:20:50.655 Write (01h): Supported LBA-Change 00:20:50.655 Read (02h): Supported 00:20:50.655 Write Zeroes (08h): Supported LBA-Change 00:20:50.655 Dataset Management (09h): Supported 00:20:50.655 00:20:50.655 Error Log 00:20:50.655 ========= 00:20:50.655 Entry: 0 00:20:50.655 Error Count: 0x3 00:20:50.655 Submission Queue Id: 0x0 00:20:50.655 Command Id: 0x5 00:20:50.655 Phase Bit: 0 00:20:50.655 Status Code: 0x2 00:20:50.655 Status Code Type: 0x0 00:20:50.655 Do Not Retry: 1 00:20:50.655 Error Location: 0x28 00:20:50.655 LBA: 0x0 00:20:50.655 Namespace: 0x0 00:20:50.655 Vendor Log Page: 0x0 00:20:50.655 ----------- 00:20:50.655 Entry: 1 00:20:50.655 Error Count: 0x2 00:20:50.655 Submission Queue Id: 0x0 00:20:50.655 Command Id: 0x5 00:20:50.655 Phase Bit: 0 00:20:50.655 Status Code: 0x2 00:20:50.655 Status Code Type: 0x0 00:20:50.655 Do Not Retry: 1 00:20:50.655 Error Location: 0x28 00:20:50.655 LBA: 0x0 00:20:50.655 Namespace: 0x0 00:20:50.655 Vendor Log Page: 0x0 00:20:50.655 ----------- 00:20:50.655 Entry: 2 00:20:50.655 Error Count: 0x1 00:20:50.655 Submission Queue Id: 0x0 00:20:50.655 Command Id: 0x0 00:20:50.655 Phase Bit: 0 00:20:50.655 Status Code: 0x2 00:20:50.655 Status Code Type: 0x0 00:20:50.655 Do Not Retry: 1 00:20:50.655 Error Location: 0x28 00:20:50.655 LBA: 0x0 00:20:50.655 Namespace: 0x0 00:20:50.655 Vendor Log Page: 0x0 00:20:50.655 00:20:50.655 Number of Queues 00:20:50.655 ================ 00:20:50.655 Number of I/O Submission Queues: 128 00:20:50.655 Number of I/O 
Completion Queues: 128 00:20:50.655 00:20:50.655 ZNS Specific Controller Data 00:20:50.655 ============================ 00:20:50.655 Zone Append Size Limit: 0 00:20:50.655 00:20:50.655 00:20:50.655 Active Namespaces 00:20:50.655 ================= 00:20:50.655 get_feature(0x05) failed 00:20:50.655 Namespace ID:1 00:20:50.655 Command Set Identifier: NVM (00h) 00:20:50.655 Deallocate: Supported 00:20:50.655 Deallocated/Unwritten Error: Not Supported 00:20:50.655 Deallocated Read Value: Unknown 00:20:50.655 Deallocate in Write Zeroes: Not Supported 00:20:50.655 Deallocated Guard Field: 0xFFFF 00:20:50.655 Flush: Supported 00:20:50.655 Reservation: Not Supported 00:20:50.655 Namespace Sharing Capabilities: Multiple Controllers 00:20:50.655 Size (in LBAs): 7814037168 (3726GiB) 00:20:50.655 Capacity (in LBAs): 7814037168 (3726GiB) 00:20:50.655 Utilization (in LBAs): 7814037168 (3726GiB) 00:20:50.655 UUID: b751f3a2-3213-411b-babf-941a0145ebab 00:20:50.655 Thin Provisioning: Not Supported 00:20:50.655 Per-NS Atomic Units: Yes 00:20:50.655 Atomic Boundary Size (Normal): 0 00:20:50.655 Atomic Boundary Size (PFail): 0 00:20:50.655 Atomic Boundary Offset: 0 00:20:50.655 NGUID/EUI64 Never Reused: No 00:20:50.655 ANA group ID: 1 00:20:50.655 Namespace Write Protected: No 00:20:50.655 Number of LBA Formats: 1 00:20:50.655 Current LBA Format: LBA Format #00 00:20:50.655 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:50.655 00:20:50.655 04:12:04 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:20:50.655 04:12:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:50.655 04:12:04 -- nvmf/common.sh@117 -- # sync 00:20:50.655 04:12:04 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:50.655 04:12:04 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:50.655 04:12:04 -- nvmf/common.sh@120 -- # set +e 00:20:50.655 04:12:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:50.655 04:12:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:50.655 rmmod nvme_rdma 
00:20:50.655 rmmod nvme_fabrics 00:20:50.655 04:12:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:50.655 04:12:05 -- nvmf/common.sh@124 -- # set -e 00:20:50.655 04:12:05 -- nvmf/common.sh@125 -- # return 0 00:20:50.655 04:12:05 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:20:50.655 04:12:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:50.655 04:12:05 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:20:50.655 04:12:05 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:20:50.655 04:12:05 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:50.655 04:12:05 -- nvmf/common.sh@675 -- # echo 0 00:20:50.655 04:12:05 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:50.655 04:12:05 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:50.655 04:12:05 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:50.656 04:12:05 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:50.656 04:12:05 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:20:50.656 04:12:05 -- nvmf/common.sh@684 -- # modprobe -r nvmet_rdma nvmet 00:20:50.656 04:12:05 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:20:53.188 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:20:53.188 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:20:53.188 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:20:53.188 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:20:53.188 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:20:53.188 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:20:53.188 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:20:53.188 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:20:53.448 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:20:53.448 0000:80:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:20:53.448 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:20:53.448 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:20:53.448 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:20:53.448 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:20:53.448 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:20:53.448 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:20:56.737 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:20:58.112 00:20:58.112 real 0m18.748s 00:20:58.112 user 0m4.848s 00:20:58.112 sys 0m9.723s 00:20:58.112 04:12:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:58.112 04:12:12 -- common/autotest_common.sh@10 -- # set +x 00:20:58.112 ************************************ 00:20:58.112 END TEST nvmf_identify_kernel_target 00:20:58.112 ************************************ 00:20:58.112 04:12:12 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:20:58.112 04:12:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:58.112 04:12:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:58.112 04:12:12 -- common/autotest_common.sh@10 -- # set +x 00:20:58.112 ************************************ 00:20:58.112 START TEST nvmf_auth 00:20:58.112 ************************************ 00:20:58.112 04:12:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:20:58.112 * Looking for test storage... 
00:20:58.112 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:58.112 04:12:12 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:58.112 04:12:12 -- nvmf/common.sh@7 -- # uname -s 00:20:58.112 04:12:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:58.112 04:12:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:58.112 04:12:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:58.112 04:12:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:58.112 04:12:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:58.112 04:12:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:58.112 04:12:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:58.112 04:12:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:58.112 04:12:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:58.112 04:12:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:58.112 04:12:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:20:58.112 04:12:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:20:58.112 04:12:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:58.112 04:12:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:58.112 04:12:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:58.112 04:12:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:58.112 04:12:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:58.456 04:12:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:58.456 04:12:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:58.456 04:12:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:58.456 04:12:12 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.457 04:12:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.457 04:12:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.457 04:12:12 -- paths/export.sh@5 -- # export PATH 00:20:58.457 04:12:12 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.457 04:12:12 -- nvmf/common.sh@47 -- # : 0 00:20:58.457 04:12:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:58.457 04:12:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:58.457 04:12:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:58.457 04:12:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:58.457 04:12:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:58.457 04:12:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:58.457 04:12:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:58.457 04:12:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:58.457 04:12:12 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:58.457 04:12:12 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:58.457 04:12:12 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:20:58.457 04:12:12 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:20:58.457 04:12:12 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:58.457 04:12:12 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:58.457 04:12:12 -- host/auth.sh@21 -- # keys=() 00:20:58.457 04:12:12 -- host/auth.sh@77 -- # nvmftestinit 00:20:58.457 04:12:12 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:20:58.457 04:12:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:20:58.457 04:12:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:58.457 04:12:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:58.457 04:12:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:58.457 04:12:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.457 04:12:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:58.457 04:12:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.457 04:12:12 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:58.457 04:12:12 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:58.457 04:12:12 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:58.457 04:12:12 -- common/autotest_common.sh@10 -- # set +x 00:21:03.725 04:12:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:03.725 04:12:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:03.725 04:12:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:03.725 04:12:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:03.725 04:12:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:03.725 04:12:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:03.725 04:12:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:03.725 04:12:18 -- nvmf/common.sh@295 -- # net_devs=() 00:21:03.725 04:12:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:03.725 04:12:18 -- nvmf/common.sh@296 -- # e810=() 00:21:03.725 04:12:18 -- nvmf/common.sh@296 -- # local -ga e810 00:21:03.725 04:12:18 -- nvmf/common.sh@297 -- # x722=() 00:21:03.725 04:12:18 -- nvmf/common.sh@297 -- # local -ga x722 00:21:03.725 04:12:18 -- nvmf/common.sh@298 -- # mlx=() 00:21:03.725 04:12:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:03.725 04:12:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:03.725 04:12:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:03.726 04:12:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:21:03.726 04:12:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:03.726 04:12:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:03.726 04:12:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:03.726 04:12:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:03.726 04:12:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:03.726 04:12:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:03.726 04:12:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:03.726 04:12:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:03.726 04:12:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:03.726 04:12:18 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:03.726 04:12:18 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:03.726 04:12:18 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:03.726 04:12:18 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:03.726 04:12:18 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:03.726 04:12:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:03.726 04:12:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:03.726 04:12:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:21:03.726 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:21:03.726 04:12:18 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:03.726 04:12:18 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:03.726 04:12:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:03.726 04:12:18 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:03.726 04:12:18 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:03.726 04:12:18 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:03.726 04:12:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:21:03.726 04:12:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:21:03.726 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:21:03.726 04:12:18 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:03.726 04:12:18 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:03.726 04:12:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:03.726 04:12:18 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:03.726 04:12:18 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:03.726 04:12:18 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:03.726 04:12:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:03.726 04:12:18 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:03.726 04:12:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:03.726 04:12:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.726 04:12:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:03.726 04:12:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.726 04:12:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:21:03.726 Found net devices under 0000:18:00.0: mlx_0_0 00:21:03.726 04:12:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.726 04:12:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:03.726 04:12:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.726 04:12:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:03.726 04:12:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.726 04:12:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:21:03.726 Found net devices under 0000:18:00.1: mlx_0_1 00:21:03.726 04:12:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.726 04:12:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:03.726 04:12:18 -- nvmf/common.sh@403 -- # is_hw=yes 
00:21:03.726 04:12:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:03.726 04:12:18 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:21:03.726 04:12:18 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:21:03.726 04:12:18 -- nvmf/common.sh@409 -- # rdma_device_init 00:21:03.726 04:12:18 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:21:03.726 04:12:18 -- nvmf/common.sh@58 -- # uname 00:21:03.726 04:12:18 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:03.726 04:12:18 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:03.726 04:12:18 -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:03.726 04:12:18 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:03.726 04:12:18 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:03.726 04:12:18 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:03.726 04:12:18 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:03.726 04:12:18 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:03.726 04:12:18 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:21:03.726 04:12:18 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:03.726 04:12:18 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:03.726 04:12:18 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:03.726 04:12:18 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:03.726 04:12:18 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:03.726 04:12:18 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:03.726 04:12:18 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:03.726 04:12:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:03.726 04:12:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:03.726 04:12:18 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:03.726 04:12:18 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:03.726 04:12:18 -- nvmf/common.sh@105 -- # continue 2 00:21:03.726 04:12:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 
00:21:03.726 04:12:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:03.726 04:12:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:03.726 04:12:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:03.726 04:12:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:03.726 04:12:18 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:03.726 04:12:18 -- nvmf/common.sh@105 -- # continue 2 00:21:03.726 04:12:18 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:03.726 04:12:18 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:03.726 04:12:18 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:03.726 04:12:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:03.726 04:12:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:03.726 04:12:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:03.726 04:12:18 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:03.726 04:12:18 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:03.726 04:12:18 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:03.726 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:03.726 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:21:03.726 altname enp24s0f0np0 00:21:03.726 altname ens785f0np0 00:21:03.726 inet 192.168.100.8/24 scope global mlx_0_0 00:21:03.726 valid_lft forever preferred_lft forever 00:21:03.726 04:12:18 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:03.726 04:12:18 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:03.726 04:12:18 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:03.726 04:12:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:03.726 04:12:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:03.726 04:12:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:03.726 04:12:18 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:03.726 04:12:18 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:03.726 04:12:18 -- 
nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:03.726 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:03.726 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:21:03.726 altname enp24s0f1np1 00:21:03.726 altname ens785f1np1 00:21:03.726 inet 192.168.100.9/24 scope global mlx_0_1 00:21:03.726 valid_lft forever preferred_lft forever 00:21:03.726 04:12:18 -- nvmf/common.sh@411 -- # return 0 00:21:03.726 04:12:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:03.726 04:12:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:03.726 04:12:18 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:21:03.726 04:12:18 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:21:03.726 04:12:18 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:03.726 04:12:18 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:03.726 04:12:18 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:03.726 04:12:18 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:03.726 04:12:18 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:03.726 04:12:18 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:03.726 04:12:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:03.726 04:12:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:03.726 04:12:18 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:03.726 04:12:18 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:03.726 04:12:18 -- nvmf/common.sh@105 -- # continue 2 00:21:03.726 04:12:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:03.726 04:12:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:03.726 04:12:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:03.726 04:12:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:03.726 04:12:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:03.726 
04:12:18 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:03.726 04:12:18 -- nvmf/common.sh@105 -- # continue 2 00:21:03.726 04:12:18 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:03.726 04:12:18 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:03.726 04:12:18 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:03.726 04:12:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:03.726 04:12:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:03.726 04:12:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:03.726 04:12:18 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:03.726 04:12:18 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:03.726 04:12:18 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:03.726 04:12:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:03.726 04:12:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:03.726 04:12:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:03.726 04:12:18 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:21:03.726 192.168.100.9' 00:21:03.726 04:12:18 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:03.726 192.168.100.9' 00:21:03.726 04:12:18 -- nvmf/common.sh@446 -- # head -n 1 00:21:03.726 04:12:18 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:03.726 04:12:18 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:21:03.726 192.168.100.9' 00:21:03.726 04:12:18 -- nvmf/common.sh@447 -- # tail -n +2 00:21:03.726 04:12:18 -- nvmf/common.sh@447 -- # head -n 1 00:21:03.726 04:12:18 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:03.727 04:12:18 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:21:03.727 04:12:18 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:03.727 04:12:18 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:21:03.727 04:12:18 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:21:03.727 04:12:18 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 
00:21:03.727 04:12:18 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:21:03.727 04:12:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:03.727 04:12:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:03.727 04:12:18 -- common/autotest_common.sh@10 -- # set +x 00:21:03.727 04:12:18 -- nvmf/common.sh@470 -- # nvmfpid=391687 00:21:03.727 04:12:18 -- nvmf/common.sh@471 -- # waitforlisten 391687 00:21:03.727 04:12:18 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:21:03.727 04:12:18 -- common/autotest_common.sh@817 -- # '[' -z 391687 ']' 00:21:03.727 04:12:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.727 04:12:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:03.727 04:12:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.727 04:12:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:03.727 04:12:18 -- common/autotest_common.sh@10 -- # set +x 00:21:04.660 04:12:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:04.660 04:12:19 -- common/autotest_common.sh@850 -- # return 0 00:21:04.660 04:12:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:04.660 04:12:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:04.660 04:12:19 -- common/autotest_common.sh@10 -- # set +x 00:21:04.660 04:12:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.660 04:12:19 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:21:04.660 04:12:19 -- host/auth.sh@81 -- # gen_key null 32 00:21:04.660 04:12:19 -- host/auth.sh@53 -- # local digest len file key 00:21:04.660 04:12:19 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:21:04.660 04:12:19 -- host/auth.sh@54 -- # local -A digests 00:21:04.660 04:12:19 -- host/auth.sh@56 -- # digest=null 00:21:04.660 04:12:19 -- host/auth.sh@56 -- # len=32 00:21:04.660 04:12:19 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:04.660 04:12:19 -- host/auth.sh@57 -- # key=39a98bb4bc377de3b400adf67e519976 00:21:04.660 04:12:19 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:21:04.661 04:12:19 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.oXJ 00:21:04.661 04:12:19 -- host/auth.sh@59 -- # format_dhchap_key 39a98bb4bc377de3b400adf67e519976 0 00:21:04.661 04:12:19 -- nvmf/common.sh@708 -- # format_key DHHC-1 39a98bb4bc377de3b400adf67e519976 0 00:21:04.661 04:12:19 -- nvmf/common.sh@691 -- # local prefix key digest 00:21:04.661 04:12:19 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:21:04.661 04:12:19 -- nvmf/common.sh@693 -- # key=39a98bb4bc377de3b400adf67e519976 00:21:04.661 04:12:19 -- nvmf/common.sh@693 -- # digest=0 00:21:04.661 04:12:19 -- nvmf/common.sh@694 -- # python - 00:21:04.661 04:12:19 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.oXJ 00:21:04.661 04:12:19 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.oXJ 00:21:04.661 04:12:19 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.oXJ 00:21:04.661 04:12:19 -- host/auth.sh@82 -- # gen_key null 48 00:21:04.661 04:12:19 -- host/auth.sh@53 -- # local digest len file key 00:21:04.661 04:12:19 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:04.661 04:12:19 -- host/auth.sh@54 -- # local -A digests 00:21:04.661 04:12:19 -- host/auth.sh@56 -- # digest=null 00:21:04.661 04:12:19 -- host/auth.sh@56 -- # len=48 00:21:04.661 04:12:19 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:04.661 04:12:19 -- host/auth.sh@57 -- # key=309ca48ac780290cc85aba44f45101dbb13d5bffcb1f5a41 00:21:04.661 04:12:19 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:21:04.661 04:12:19 -- host/auth.sh@58 -- # 
file=/tmp/spdk.key-null.j1A 00:21:04.661 04:12:19 -- host/auth.sh@59 -- # format_dhchap_key 309ca48ac780290cc85aba44f45101dbb13d5bffcb1f5a41 0 00:21:04.661 04:12:19 -- nvmf/common.sh@708 -- # format_key DHHC-1 309ca48ac780290cc85aba44f45101dbb13d5bffcb1f5a41 0 00:21:04.661 04:12:19 -- nvmf/common.sh@691 -- # local prefix key digest 00:21:04.661 04:12:19 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:21:04.661 04:12:19 -- nvmf/common.sh@693 -- # key=309ca48ac780290cc85aba44f45101dbb13d5bffcb1f5a41 00:21:04.661 04:12:19 -- nvmf/common.sh@693 -- # digest=0 00:21:04.661 04:12:19 -- nvmf/common.sh@694 -- # python - 00:21:04.661 04:12:19 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.j1A 00:21:04.661 04:12:19 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.j1A 00:21:04.661 04:12:19 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.j1A 00:21:04.661 04:12:19 -- host/auth.sh@83 -- # gen_key sha256 32 00:21:04.661 04:12:19 -- host/auth.sh@53 -- # local digest len file key 00:21:04.661 04:12:19 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:04.661 04:12:19 -- host/auth.sh@54 -- # local -A digests 00:21:04.661 04:12:19 -- host/auth.sh@56 -- # digest=sha256 00:21:04.661 04:12:19 -- host/auth.sh@56 -- # len=32 00:21:04.661 04:12:19 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:04.661 04:12:19 -- host/auth.sh@57 -- # key=bf5ff329fa69f0d94b04fee99e2bad80 00:21:04.661 04:12:19 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:21:04.918 04:12:19 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.4Kx 00:21:04.918 04:12:19 -- host/auth.sh@59 -- # format_dhchap_key bf5ff329fa69f0d94b04fee99e2bad80 1 00:21:04.918 04:12:19 -- nvmf/common.sh@708 -- # format_key DHHC-1 bf5ff329fa69f0d94b04fee99e2bad80 1 00:21:04.918 04:12:19 -- nvmf/common.sh@691 -- # local prefix key digest 00:21:04.918 04:12:19 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:21:04.918 04:12:19 -- nvmf/common.sh@693 -- # 
key=bf5ff329fa69f0d94b04fee99e2bad80 00:21:04.918 04:12:19 -- nvmf/common.sh@693 -- # digest=1 00:21:04.918 04:12:19 -- nvmf/common.sh@694 -- # python - 00:21:04.918 04:12:19 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.4Kx 00:21:04.918 04:12:19 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.4Kx 00:21:04.918 04:12:19 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.4Kx 00:21:04.919 04:12:19 -- host/auth.sh@84 -- # gen_key sha384 48 00:21:04.919 04:12:19 -- host/auth.sh@53 -- # local digest len file key 00:21:04.919 04:12:19 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:04.919 04:12:19 -- host/auth.sh@54 -- # local -A digests 00:21:04.919 04:12:19 -- host/auth.sh@56 -- # digest=sha384 00:21:04.919 04:12:19 -- host/auth.sh@56 -- # len=48 00:21:04.919 04:12:19 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:04.919 04:12:19 -- host/auth.sh@57 -- # key=22234abd9de36f76fa78cb17af8014f1cd290db735b81810 00:21:04.919 04:12:19 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:21:04.919 04:12:19 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.euK 00:21:04.919 04:12:19 -- host/auth.sh@59 -- # format_dhchap_key 22234abd9de36f76fa78cb17af8014f1cd290db735b81810 2 00:21:04.919 04:12:19 -- nvmf/common.sh@708 -- # format_key DHHC-1 22234abd9de36f76fa78cb17af8014f1cd290db735b81810 2 00:21:04.919 04:12:19 -- nvmf/common.sh@691 -- # local prefix key digest 00:21:04.919 04:12:19 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:21:04.919 04:12:19 -- nvmf/common.sh@693 -- # key=22234abd9de36f76fa78cb17af8014f1cd290db735b81810 00:21:04.919 04:12:19 -- nvmf/common.sh@693 -- # digest=2 00:21:04.919 04:12:19 -- nvmf/common.sh@694 -- # python - 00:21:04.919 04:12:19 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.euK 00:21:04.919 04:12:19 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.euK 00:21:04.919 04:12:19 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.euK 00:21:04.919 04:12:19 -- 
host/auth.sh@85 -- # gen_key sha512 64 00:21:04.919 04:12:19 -- host/auth.sh@53 -- # local digest len file key 00:21:04.919 04:12:19 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:04.919 04:12:19 -- host/auth.sh@54 -- # local -A digests 00:21:04.919 04:12:19 -- host/auth.sh@56 -- # digest=sha512 00:21:04.919 04:12:19 -- host/auth.sh@56 -- # len=64 00:21:04.919 04:12:19 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:04.919 04:12:19 -- host/auth.sh@57 -- # key=2da3f6690d33cd2c43c8971d9f98a541126d5a8975787a96c594529dddccd2fe 00:21:04.919 04:12:19 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:21:04.919 04:12:19 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.6Ll 00:21:04.919 04:12:19 -- host/auth.sh@59 -- # format_dhchap_key 2da3f6690d33cd2c43c8971d9f98a541126d5a8975787a96c594529dddccd2fe 3 00:21:04.919 04:12:19 -- nvmf/common.sh@708 -- # format_key DHHC-1 2da3f6690d33cd2c43c8971d9f98a541126d5a8975787a96c594529dddccd2fe 3 00:21:04.919 04:12:19 -- nvmf/common.sh@691 -- # local prefix key digest 00:21:04.919 04:12:19 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:21:04.919 04:12:19 -- nvmf/common.sh@693 -- # key=2da3f6690d33cd2c43c8971d9f98a541126d5a8975787a96c594529dddccd2fe 00:21:04.919 04:12:19 -- nvmf/common.sh@693 -- # digest=3 00:21:04.919 04:12:19 -- nvmf/common.sh@694 -- # python - 00:21:04.919 04:12:19 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.6Ll 00:21:04.919 04:12:19 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.6Ll 00:21:04.919 04:12:19 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.6Ll 00:21:04.919 04:12:19 -- host/auth.sh@87 -- # waitforlisten 391687 00:21:04.919 04:12:19 -- common/autotest_common.sh@817 -- # '[' -z 391687 ']' 00:21:04.919 04:12:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.919 04:12:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:04.919 04:12:19 -- common/autotest_common.sh@824 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.919 04:12:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:04.919 04:12:19 -- common/autotest_common.sh@10 -- # set +x 00:21:05.177 04:12:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:05.177 04:12:19 -- common/autotest_common.sh@850 -- # return 0 00:21:05.177 04:12:19 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:21:05.177 04:12:19 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.oXJ 00:21:05.177 04:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:05.177 04:12:19 -- common/autotest_common.sh@10 -- # set +x 00:21:05.177 04:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:05.177 04:12:19 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:21:05.177 04:12:19 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.j1A 00:21:05.177 04:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:05.177 04:12:19 -- common/autotest_common.sh@10 -- # set +x 00:21:05.177 04:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:05.177 04:12:19 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:21:05.177 04:12:19 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.4Kx 00:21:05.177 04:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:05.177 04:12:19 -- common/autotest_common.sh@10 -- # set +x 00:21:05.177 04:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:05.177 04:12:19 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:21:05.177 04:12:19 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.euK 00:21:05.177 04:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:05.177 04:12:19 -- common/autotest_common.sh@10 -- # set +x 00:21:05.177 04:12:19 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:05.177 04:12:19 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:21:05.177 04:12:19 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.6Ll 00:21:05.177 04:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:05.177 04:12:19 -- common/autotest_common.sh@10 -- # set +x 00:21:05.177 04:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:05.177 04:12:19 -- host/auth.sh@92 -- # nvmet_auth_init 00:21:05.177 04:12:19 -- host/auth.sh@35 -- # get_main_ns_ip 00:21:05.177 04:12:19 -- nvmf/common.sh@717 -- # local ip 00:21:05.177 04:12:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:05.177 04:12:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:05.177 04:12:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.177 04:12:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.177 04:12:19 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:05.177 04:12:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:05.177 04:12:19 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:05.177 04:12:19 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:05.177 04:12:19 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:05.177 04:12:19 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:21:05.177 04:12:19 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:21:05.177 04:12:19 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:21:05.177 04:12:19 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:05.177 04:12:19 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:05.177 04:12:19 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:05.177 04:12:19 -- 
nvmf/common.sh@628 -- # local block nvme 00:21:05.177 04:12:19 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:21:05.177 04:12:19 -- nvmf/common.sh@631 -- # modprobe nvmet 00:21:05.177 04:12:19 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:05.177 04:12:19 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:21:07.703 Waiting for block devices as requested 00:21:07.703 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:21:07.703 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:21:07.960 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:21:07.960 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:21:07.960 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:21:07.960 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:21:08.216 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:21:08.216 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:21:08.216 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:21:08.473 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:21:08.473 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:21:08.473 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:21:08.473 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:21:08.731 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:21:08.731 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:21:08.731 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:21:08.988 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:21:10.885 04:12:25 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:21:10.885 04:12:25 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:10.885 04:12:25 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:21:10.885 04:12:25 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:21:10.885 04:12:25 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:10.885 04:12:25 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:10.885 04:12:25 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 
00:21:10.885 04:12:25 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:21:10.885 04:12:25 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:21:10.885 No valid GPT data, bailing 00:21:10.885 04:12:25 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:10.885 04:12:25 -- scripts/common.sh@391 -- # pt= 00:21:10.885 04:12:25 -- scripts/common.sh@392 -- # return 1 00:21:10.885 04:12:25 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:21:10.885 04:12:25 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:21:10.885 04:12:25 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:10.885 04:12:25 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:10.885 04:12:25 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:10.885 04:12:25 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:21:10.885 04:12:25 -- nvmf/common.sh@656 -- # echo 1 00:21:10.885 04:12:25 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:21:10.885 04:12:25 -- nvmf/common.sh@658 -- # echo 1 00:21:10.885 04:12:25 -- nvmf/common.sh@660 -- # echo 192.168.100.8 00:21:10.885 04:12:25 -- nvmf/common.sh@661 -- # echo rdma 00:21:10.885 04:12:25 -- nvmf/common.sh@662 -- # echo 4420 00:21:10.885 04:12:25 -- nvmf/common.sh@663 -- # echo ipv4 00:21:10.885 04:12:25 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:10.885 04:12:25 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:21:10.885 00:21:10.885 Discovery Log Number of Records 2, Generation counter 2 00:21:10.885 =====Discovery Log Entry 0====== 00:21:10.885 trtype: rdma 00:21:10.885 adrfam: ipv4 
00:21:10.885 subtype: current discovery subsystem 00:21:10.885 treq: not specified, sq flow control disable supported 00:21:10.885 portid: 1 00:21:10.885 trsvcid: 4420 00:21:10.885 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:10.885 traddr: 192.168.100.8 00:21:10.885 eflags: none 00:21:10.885 rdma_prtype: not specified 00:21:10.885 rdma_qptype: connected 00:21:10.885 rdma_cms: rdma-cm 00:21:10.885 rdma_pkey: 0x0000 00:21:10.885 =====Discovery Log Entry 1====== 00:21:10.885 trtype: rdma 00:21:10.885 adrfam: ipv4 00:21:10.885 subtype: nvme subsystem 00:21:10.885 treq: not specified, sq flow control disable supported 00:21:10.885 portid: 1 00:21:10.885 trsvcid: 4420 00:21:10.885 subnqn: nqn.2024-02.io.spdk:cnode0 00:21:10.885 traddr: 192.168.100.8 00:21:10.885 eflags: none 00:21:10.885 rdma_prtype: not specified 00:21:10.885 rdma_qptype: connected 00:21:10.885 rdma_cms: rdma-cm 00:21:10.885 rdma_pkey: 0x0000 00:21:10.885 04:12:25 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:10.885 04:12:25 -- host/auth.sh@37 -- # echo 0 00:21:10.885 04:12:25 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:10.885 04:12:25 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:10.885 04:12:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:10.885 04:12:25 -- host/auth.sh@44 -- # digest=sha256 00:21:10.885 04:12:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:10.886 04:12:25 -- host/auth.sh@44 -- # keyid=1 00:21:10.886 04:12:25 -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:10.886 04:12:25 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:21:10.886 04:12:25 -- host/auth.sh@48 -- # echo ffdhe2048 00:21:11.144 04:12:25 -- host/auth.sh@49 -- # echo 
DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:11.144 04:12:25 -- host/auth.sh@100 -- # IFS=, 00:21:11.144 04:12:25 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:21:11.144 04:12:25 -- host/auth.sh@100 -- # IFS=, 00:21:11.144 04:12:25 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:11.144 04:12:25 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:21:11.144 04:12:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:11.144 04:12:25 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:21:11.144 04:12:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:11.144 04:12:25 -- host/auth.sh@68 -- # keyid=1 00:21:11.144 04:12:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:11.144 04:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:11.144 04:12:25 -- common/autotest_common.sh@10 -- # set +x 00:21:11.144 04:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:11.144 04:12:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:11.144 04:12:25 -- nvmf/common.sh@717 -- # local ip 00:21:11.144 04:12:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:11.144 04:12:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:11.144 04:12:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.144 04:12:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.144 04:12:25 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:11.144 04:12:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:11.144 04:12:25 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:11.144 04:12:25 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:11.144 04:12:25 -- nvmf/common.sh@731 -- # echo 
192.168.100.8 00:21:11.144 04:12:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:21:11.144 04:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:11.144 04:12:25 -- common/autotest_common.sh@10 -- # set +x 00:21:11.144 nvme0n1 00:21:11.144 04:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:11.144 04:12:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.144 04:12:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:11.144 04:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:11.144 04:12:25 -- common/autotest_common.sh@10 -- # set +x 00:21:11.144 04:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:11.402 04:12:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.402 04:12:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.402 04:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:11.402 04:12:25 -- common/autotest_common.sh@10 -- # set +x 00:21:11.402 04:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:11.402 04:12:25 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:21:11.402 04:12:25 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:21:11.402 04:12:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:11.402 04:12:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:21:11.402 04:12:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:11.402 04:12:25 -- host/auth.sh@44 -- # digest=sha256 00:21:11.402 04:12:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:11.402 04:12:25 -- host/auth.sh@44 -- # keyid=0 00:21:11.402 04:12:25 -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd: 00:21:11.402 04:12:25 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:21:11.402 04:12:25 -- host/auth.sh@48 -- # echo ffdhe2048 
00:21:11.402 04:12:25 -- host/auth.sh@49 -- # echo DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd: 00:21:11.402 04:12:25 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:21:11.402 04:12:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:11.402 04:12:25 -- host/auth.sh@68 -- # digest=sha256 00:21:11.402 04:12:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:21:11.402 04:12:25 -- host/auth.sh@68 -- # keyid=0 00:21:11.402 04:12:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:11.402 04:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:11.402 04:12:25 -- common/autotest_common.sh@10 -- # set +x 00:21:11.402 04:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:11.402 04:12:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:11.402 04:12:25 -- nvmf/common.sh@717 -- # local ip 00:21:11.402 04:12:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:11.402 04:12:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:11.402 04:12:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.402 04:12:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.402 04:12:25 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:11.402 04:12:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:11.402 04:12:25 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:11.402 04:12:25 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:11.402 04:12:25 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:11.402 04:12:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:21:11.402 04:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:11.402 04:12:25 -- common/autotest_common.sh@10 -- # set +x 00:21:11.402 nvme0n1 00:21:11.402 04:12:25 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:11.402 04:12:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.402 04:12:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:11.402 04:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:11.402 04:12:25 -- common/autotest_common.sh@10 -- # set +x 00:21:11.402 04:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:11.661 04:12:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.661 04:12:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.661 04:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:11.661 04:12:25 -- common/autotest_common.sh@10 -- # set +x 00:21:11.661 04:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:11.661 04:12:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:11.661 04:12:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:11.661 04:12:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:11.661 04:12:25 -- host/auth.sh@44 -- # digest=sha256 00:21:11.661 04:12:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:11.661 04:12:25 -- host/auth.sh@44 -- # keyid=1 00:21:11.661 04:12:25 -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:11.661 04:12:25 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:21:11.661 04:12:25 -- host/auth.sh@48 -- # echo ffdhe2048 00:21:11.661 04:12:25 -- host/auth.sh@49 -- # echo DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:11.661 04:12:25 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:21:11.661 04:12:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:11.661 04:12:25 -- host/auth.sh@68 -- # digest=sha256 00:21:11.661 04:12:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:21:11.661 04:12:25 -- host/auth.sh@68 -- # keyid=1 00:21:11.661 04:12:25 -- host/auth.sh@69 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:21:11.661 04:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:11.661 04:12:25 -- common/autotest_common.sh@10 -- # set +x
00:21:11.661 04:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:11.661 04:12:25 -- host/auth.sh@70 -- # get_main_ns_ip
00:21:11.661 04:12:25 -- nvmf/common.sh@717 -- # local ip
00:21:11.661 04:12:25 -- nvmf/common.sh@718 -- # ip_candidates=()
00:21:11.661 04:12:25 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:21:11.661 04:12:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:11.661 04:12:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:11.661 04:12:25 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:21:11.661 04:12:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:21:11.661 04:12:25 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:21:11.661 04:12:25 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:21:11.661 04:12:25 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:21:11.661 04:12:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:21:11.661 04:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:11.661 04:12:25 -- common/autotest_common.sh@10 -- # set +x
00:21:11.661 nvme0n1
00:21:11.661 04:12:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:11.661 04:12:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:21:11.661 04:12:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:11.661 04:12:26 -- host/auth.sh@73 -- # jq -r '.[].name'
00:21:11.661 04:12:26 -- common/autotest_common.sh@10 -- # set +x
00:21:11.661 04:12:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:11.920 04:12:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:11.920 04:12:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:11.920 04:12:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:11.920 04:12:26 -- common/autotest_common.sh@10 -- # set +x
00:21:11.920 04:12:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:11.920 04:12:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:21:11.920 04:12:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:21:11.920 04:12:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:21:11.920 04:12:26 -- host/auth.sh@44 -- # digest=sha256
00:21:11.920 04:12:26 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:21:11.920 04:12:26 -- host/auth.sh@44 -- # keyid=2
00:21:11.920 04:12:26 -- host/auth.sh@45 -- # key=DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8:
00:21:11.920 04:12:26 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:21:11.920 04:12:26 -- host/auth.sh@48 -- # echo ffdhe2048
00:21:11.920 04:12:26 -- host/auth.sh@49 -- # echo DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8:
00:21:11.920 04:12:26 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2
00:21:11.920 04:12:26 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:21:11.920 04:12:26 -- host/auth.sh@68 -- # digest=sha256
00:21:11.920 04:12:26 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:21:11.920 04:12:26 -- host/auth.sh@68 -- # keyid=2
00:21:11.920 04:12:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:21:11.920 04:12:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:11.920 04:12:26 -- common/autotest_common.sh@10 -- # set +x
00:21:11.920 04:12:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:11.920 04:12:26 -- host/auth.sh@70 -- # get_main_ns_ip
00:21:11.920 04:12:26 -- nvmf/common.sh@717 -- # local ip
00:21:11.920 04:12:26 -- nvmf/common.sh@718 -- # ip_candidates=()
00:21:11.920 04:12:26 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:21:11.920 04:12:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:11.920 04:12:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:11.920 04:12:26 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:21:11.920 04:12:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:21:11.920 04:12:26 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:21:11.920 04:12:26 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:21:11.920 04:12:26 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:21:11.920 04:12:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:21:11.920 04:12:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:11.920 04:12:26 -- common/autotest_common.sh@10 -- # set +x
00:21:11.920 nvme0n1
00:21:11.920 04:12:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:11.920 04:12:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:21:11.920 04:12:26 -- host/auth.sh@73 -- # jq -r '.[].name'
00:21:11.920 04:12:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:11.920 04:12:26 -- common/autotest_common.sh@10 -- # set +x
00:21:11.920 04:12:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:12.180 04:12:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:12.180 04:12:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:12.180 04:12:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:12.180 04:12:26 -- common/autotest_common.sh@10 -- # set +x
00:21:12.180 04:12:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:12.180 04:12:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:21:12.180 04:12:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:21:12.180 04:12:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:21:12.180 04:12:26 -- host/auth.sh@44 -- # digest=sha256
00:21:12.180 04:12:26 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:21:12.180 04:12:26 -- host/auth.sh@44 -- # keyid=3
00:21:12.180 04:12:26 -- host/auth.sh@45 -- # key=DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==:
00:21:12.180 04:12:26 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:21:12.180 04:12:26 -- host/auth.sh@48 -- # echo ffdhe2048
00:21:12.180 04:12:26 -- host/auth.sh@49 -- # echo DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==:
00:21:12.180 04:12:26 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3
00:21:12.180 04:12:26 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:21:12.180 04:12:26 -- host/auth.sh@68 -- # digest=sha256
00:21:12.180 04:12:26 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:21:12.180 04:12:26 -- host/auth.sh@68 -- # keyid=3
00:21:12.180 04:12:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:21:12.180 04:12:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:12.180 04:12:26 -- common/autotest_common.sh@10 -- # set +x
00:21:12.180 04:12:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:12.180 04:12:26 -- host/auth.sh@70 -- # get_main_ns_ip
00:21:12.180 04:12:26 -- nvmf/common.sh@717 -- # local ip
00:21:12.180 04:12:26 -- nvmf/common.sh@718 -- # ip_candidates=()
00:21:12.180 04:12:26 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:21:12.180 04:12:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:12.180 04:12:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:12.180 04:12:26 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:21:12.180 04:12:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:21:12.180 04:12:26 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:21:12.180 04:12:26 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:21:12.180 04:12:26 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:21:12.180 04:12:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:21:12.180 04:12:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:12.180 04:12:26 -- common/autotest_common.sh@10 -- # set +x
00:21:12.180 nvme0n1
00:21:12.180 04:12:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:12.180 04:12:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:21:12.180 04:12:26 -- host/auth.sh@73 -- # jq -r '.[].name'
00:21:12.180 04:12:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:12.180 04:12:26 -- common/autotest_common.sh@10 -- # set +x
00:21:12.438 04:12:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:12.438 04:12:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:12.438 04:12:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:12.438 04:12:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:12.438 04:12:26 -- common/autotest_common.sh@10 -- # set +x
00:21:12.438 04:12:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:12.438 04:12:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:21:12.438 04:12:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:21:12.438 04:12:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:21:12.438 04:12:26 -- host/auth.sh@44 -- # digest=sha256
00:21:12.438 04:12:26 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:21:12.438 04:12:26 -- host/auth.sh@44 -- # keyid=4
00:21:12.438 04:12:26 -- host/auth.sh@45 -- # key=DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=:
00:21:12.438 04:12:26 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:21:12.438 04:12:26 -- host/auth.sh@48 -- # echo ffdhe2048
00:21:12.438 04:12:26 -- host/auth.sh@49 -- # echo DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=:
00:21:12.438 04:12:26 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4
00:21:12.438 04:12:26 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:21:12.438 04:12:26 -- host/auth.sh@68 -- # digest=sha256
00:21:12.438 04:12:26 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:21:12.438 04:12:26 -- host/auth.sh@68 -- # keyid=4
00:21:12.438 04:12:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:21:12.438 04:12:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:12.438 04:12:26 -- common/autotest_common.sh@10 -- # set +x
00:21:12.438 04:12:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:12.438 04:12:26 -- host/auth.sh@70 -- # get_main_ns_ip
00:21:12.438 04:12:26 -- nvmf/common.sh@717 -- # local ip
00:21:12.438 04:12:26 -- nvmf/common.sh@718 -- # ip_candidates=()
00:21:12.438 04:12:26 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:21:12.438 04:12:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:12.438 04:12:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:12.438 04:12:26 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:21:12.438 04:12:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:21:12.438 04:12:26 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:21:12.438 04:12:26 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:21:12.438 04:12:26 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:21:12.438 04:12:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:21:12.438 04:12:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:12.438 04:12:26 -- common/autotest_common.sh@10 -- # set +x
00:21:12.697 nvme0n1
00:21:12.697 04:12:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:12.697 04:12:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:21:12.697 04:12:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:12.697 04:12:26 -- host/auth.sh@73 -- # jq -r '.[].name'
00:21:12.697 04:12:26 -- common/autotest_common.sh@10 -- # set +x
00:21:12.697 04:12:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:12.697 04:12:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:12.697 04:12:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:12.697 04:12:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:12.697 04:12:27 -- common/autotest_common.sh@10 -- # set +x
00:21:12.697 04:12:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:12.697 04:12:27 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:21:12.697 04:12:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:21:12.697 04:12:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:21:12.697 04:12:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:21:12.697 04:12:27 -- host/auth.sh@44 -- # digest=sha256
00:21:12.697 04:12:27 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:21:12.697 04:12:27 -- host/auth.sh@44 -- # keyid=0
00:21:12.697 04:12:27 -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd:
00:21:12.697 04:12:27 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:21:12.697 04:12:27 -- host/auth.sh@48 -- # echo ffdhe3072
00:21:12.956 04:12:27 -- host/auth.sh@49 -- # echo DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd:
00:21:12.956 04:12:27 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0
00:21:12.956 04:12:27 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:21:12.956 04:12:27 -- host/auth.sh@68 -- # digest=sha256
00:21:12.956 04:12:27 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:21:12.956 04:12:27 -- host/auth.sh@68 -- # keyid=0
00:21:12.956 04:12:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:21:12.956 04:12:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:12.956 04:12:27 -- common/autotest_common.sh@10 -- # set +x
00:21:12.956 04:12:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:12.956 04:12:27 -- host/auth.sh@70 -- # get_main_ns_ip
00:21:12.956 04:12:27 -- nvmf/common.sh@717 -- # local ip
00:21:12.956 04:12:27 -- nvmf/common.sh@718 -- # ip_candidates=()
00:21:12.956 04:12:27 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:21:12.956 04:12:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:12.956 04:12:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:12.956 04:12:27 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:21:12.956 04:12:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:21:12.956 04:12:27 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:21:12.956 04:12:27 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:21:12.956 04:12:27 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:21:12.956 04:12:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:21:12.956 04:12:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:12.956 04:12:27 -- common/autotest_common.sh@10 -- # set +x
00:21:12.956 nvme0n1
00:21:12.956 04:12:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:12.956 04:12:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:21:12.956 04:12:27 -- host/auth.sh@73 -- # jq -r '.[].name'
00:21:12.956 04:12:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:12.956 04:12:27 -- common/autotest_common.sh@10 -- # set +x
00:21:12.956 04:12:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:13.214 04:12:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:13.214 04:12:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:13.214 04:12:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:13.214 04:12:27 -- common/autotest_common.sh@10 -- # set +x
00:21:13.214 04:12:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:13.214 04:12:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:21:13.214 04:12:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:21:13.214 04:12:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:21:13.214 04:12:27 -- host/auth.sh@44 -- # digest=sha256
00:21:13.214 04:12:27 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:21:13.214 04:12:27 -- host/auth.sh@44 -- # keyid=1
00:21:13.214 04:12:27 -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==:
00:21:13.214 04:12:27 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:21:13.214 04:12:27 -- host/auth.sh@48 -- # echo ffdhe3072
00:21:13.214 04:12:27 -- host/auth.sh@49 -- # echo DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==:
00:21:13.214 04:12:27 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1
00:21:13.214 04:12:27 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:21:13.214 04:12:27 -- host/auth.sh@68 -- # digest=sha256
00:21:13.214 04:12:27 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:21:13.214 04:12:27 -- host/auth.sh@68 -- # keyid=1
00:21:13.214 04:12:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:21:13.214 04:12:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:13.214 04:12:27 -- common/autotest_common.sh@10 -- # set +x
00:21:13.214 04:12:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:13.214 04:12:27 -- host/auth.sh@70 -- # get_main_ns_ip
00:21:13.214 04:12:27 -- nvmf/common.sh@717 -- # local ip
00:21:13.214 04:12:27 -- nvmf/common.sh@718 -- # ip_candidates=()
00:21:13.214 04:12:27 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:21:13.214 04:12:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:13.214 04:12:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:13.214 04:12:27 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:21:13.214 04:12:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:21:13.214 04:12:27 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:21:13.215 04:12:27 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:21:13.215 04:12:27 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:21:13.215 04:12:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:21:13.215 04:12:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:13.215 04:12:27 -- common/autotest_common.sh@10 -- # set +x
00:21:13.215 nvme0n1
00:21:13.215 04:12:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:13.215 04:12:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:21:13.215 04:12:27 -- host/auth.sh@73 -- # jq -r '.[].name'
00:21:13.215 04:12:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:13.215 04:12:27 -- common/autotest_common.sh@10 -- # set +x
00:21:13.215 04:12:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:13.472 04:12:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:13.472 04:12:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:13.472 04:12:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:13.472 04:12:27 -- common/autotest_common.sh@10 -- # set +x
00:21:13.472 04:12:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:13.472 04:12:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:21:13.472 04:12:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:21:13.472 04:12:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:21:13.472 04:12:27 -- host/auth.sh@44 -- # digest=sha256
00:21:13.472 04:12:27 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:21:13.472 04:12:27 -- host/auth.sh@44 -- # keyid=2
00:21:13.472 04:12:27 -- host/auth.sh@45 -- # key=DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8:
00:21:13.472 04:12:27 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:21:13.472 04:12:27 -- host/auth.sh@48 -- # echo ffdhe3072
00:21:13.472 04:12:27 -- host/auth.sh@49 -- # echo DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8:
00:21:13.472 04:12:27 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2
00:21:13.472 04:12:27 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:21:13.472 04:12:27 -- host/auth.sh@68 -- # digest=sha256
00:21:13.472 04:12:27 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:21:13.472 04:12:27 -- host/auth.sh@68 -- # keyid=2
00:21:13.472 04:12:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:21:13.472 04:12:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:13.472 04:12:27 -- common/autotest_common.sh@10 -- # set +x
00:21:13.472 04:12:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:13.472 04:12:27 -- host/auth.sh@70 -- # get_main_ns_ip
00:21:13.472 04:12:27 -- nvmf/common.sh@717 -- # local ip
00:21:13.472 04:12:27 -- nvmf/common.sh@718 -- # ip_candidates=()
00:21:13.472 04:12:27 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:21:13.472 04:12:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:13.472 04:12:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:13.472 04:12:27 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:21:13.472 04:12:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:21:13.472 04:12:27 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:21:13.472 04:12:27 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:21:13.472 04:12:27 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:21:13.472 04:12:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:21:13.472 04:12:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:13.472 04:12:27 -- common/autotest_common.sh@10 -- # set +x
00:21:13.472 nvme0n1
00:21:13.472 04:12:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:13.730 04:12:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:21:13.730 04:12:27 -- host/auth.sh@73 -- # jq -r '.[].name'
00:21:13.730 04:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:13.730 04:12:28 -- common/autotest_common.sh@10 -- # set +x
00:21:13.730 04:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:13.730 04:12:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:13.730 04:12:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:13.730 04:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:13.730 04:12:28 -- common/autotest_common.sh@10 -- # set +x
00:21:13.730 04:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:13.730 04:12:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:21:13.730 04:12:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:21:13.730 04:12:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:21:13.730 04:12:28 -- host/auth.sh@44 -- # digest=sha256
00:21:13.730 04:12:28 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:21:13.730 04:12:28 -- host/auth.sh@44 -- # keyid=3
00:21:13.730 04:12:28 -- host/auth.sh@45 -- # key=DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==:
00:21:13.730 04:12:28 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:21:13.730 04:12:28 -- host/auth.sh@48 -- # echo ffdhe3072
00:21:13.730 04:12:28 -- host/auth.sh@49 -- # echo DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==:
00:21:13.730 04:12:28 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3
00:21:13.730 04:12:28 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:21:13.730 04:12:28 -- host/auth.sh@68 -- # digest=sha256
00:21:13.730 04:12:28 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:21:13.730 04:12:28 -- host/auth.sh@68 -- # keyid=3
00:21:13.730 04:12:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:21:13.730 04:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:13.730 04:12:28 -- common/autotest_common.sh@10 -- # set +x
00:21:13.730 04:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:13.730 04:12:28 -- host/auth.sh@70 -- # get_main_ns_ip
00:21:13.730 04:12:28 -- nvmf/common.sh@717 -- # local ip
00:21:13.730 04:12:28 -- nvmf/common.sh@718 -- # ip_candidates=()
00:21:13.730 04:12:28 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:21:13.730 04:12:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:13.730 04:12:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:13.730 04:12:28 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:21:13.730 04:12:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:21:13.730 04:12:28 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:21:13.730 04:12:28 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:21:13.730 04:12:28 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:21:13.730 04:12:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:21:13.730 04:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:13.730 04:12:28 -- common/autotest_common.sh@10 -- # set +x
00:21:13.989 nvme0n1
00:21:13.989 04:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:13.989 04:12:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:21:13.989 04:12:28 -- host/auth.sh@73 -- # jq -r '.[].name'
00:21:13.989 04:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:13.989 04:12:28 -- common/autotest_common.sh@10 -- # set +x
00:21:13.989 04:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:13.989 04:12:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:13.989 04:12:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:13.989 04:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:13.989 04:12:28 -- common/autotest_common.sh@10 -- # set +x
00:21:13.989 04:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:13.989 04:12:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:21:13.989 04:12:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:21:13.989 04:12:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:21:13.989 04:12:28 -- host/auth.sh@44 -- # digest=sha256
00:21:13.989 04:12:28 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:21:13.989 04:12:28 -- host/auth.sh@44 -- # keyid=4
00:21:13.989 04:12:28 -- host/auth.sh@45 -- # key=DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=:
00:21:13.989 04:12:28 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:21:13.989 04:12:28 -- host/auth.sh@48 -- # echo ffdhe3072
00:21:13.989 04:12:28 -- host/auth.sh@49 -- # echo DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=:
00:21:13.989 04:12:28 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4
00:21:13.989 04:12:28 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:21:13.989 04:12:28 -- host/auth.sh@68 -- # digest=sha256
00:21:13.989 04:12:28 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:21:13.989 04:12:28 -- host/auth.sh@68 -- # keyid=4
00:21:13.989 04:12:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:21:13.989 04:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:13.989 04:12:28 -- common/autotest_common.sh@10 -- # set +x
00:21:13.989 04:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:13.989 04:12:28 -- host/auth.sh@70 -- # get_main_ns_ip
00:21:13.989 04:12:28 -- nvmf/common.sh@717 -- # local ip
00:21:13.989 04:12:28 -- nvmf/common.sh@718 -- # ip_candidates=()
00:21:13.989 04:12:28 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:21:13.989 04:12:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:13.989 04:12:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:13.989 04:12:28 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:21:13.989 04:12:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:21:13.989 04:12:28 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:21:13.989 04:12:28 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:21:13.989 04:12:28 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:21:13.989 04:12:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:21:13.989 04:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:13.989 04:12:28 -- common/autotest_common.sh@10 -- # set +x
00:21:14.248 nvme0n1
00:21:14.248 04:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:14.248 04:12:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:21:14.248 04:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:14.248 04:12:28 -- host/auth.sh@73 -- # jq -r '.[].name'
00:21:14.248 04:12:28 -- common/autotest_common.sh@10 -- # set +x
00:21:14.248 04:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:14.248 04:12:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:14.248 04:12:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:14.248 04:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:14.248 04:12:28 -- common/autotest_common.sh@10 -- # set +x
00:21:14.248 04:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:14.248 04:12:28 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:21:14.248 04:12:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:21:14.248 04:12:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:21:14.248 04:12:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:21:14.248 04:12:28 -- host/auth.sh@44 -- # digest=sha256
00:21:14.248 04:12:28 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:21:14.248 04:12:28 -- host/auth.sh@44 -- # keyid=0
00:21:14.248 04:12:28 -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd:
00:21:14.248 04:12:28 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:21:14.248 04:12:28 -- host/auth.sh@48 -- # echo ffdhe4096
00:21:14.507 04:12:29 -- host/auth.sh@49 -- # echo DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd:
00:21:14.507 04:12:29 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0
00:21:14.507 04:12:29 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:21:14.507 04:12:29 -- host/auth.sh@68 -- # digest=sha256
00:21:14.507 04:12:29 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:21:14.507 04:12:29 -- host/auth.sh@68 -- # keyid=0
00:21:14.507 04:12:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:21:14.507 04:12:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:14.507 04:12:29 -- common/autotest_common.sh@10 -- # set +x
00:21:14.507 04:12:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:14.507 04:12:29 -- host/auth.sh@70 -- # get_main_ns_ip
00:21:14.507 04:12:29 -- nvmf/common.sh@717 -- # local ip
00:21:14.507 04:12:29 -- nvmf/common.sh@718 -- # ip_candidates=()
00:21:14.507 04:12:29 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:21:14.507 04:12:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:14.507 04:12:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:14.507 04:12:29 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:21:14.507 04:12:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:21:14.507 04:12:29 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:21:14.507 04:12:29 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:21:14.507 04:12:29 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:21:14.507 04:12:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:21:14.507 04:12:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:14.507 04:12:29 -- common/autotest_common.sh@10 -- # set +x
00:21:14.765 nvme0n1
00:21:14.765 04:12:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:14.765 04:12:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:21:14.765 04:12:29 -- host/auth.sh@73 -- # jq -r '.[].name'
00:21:14.765 04:12:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:14.765 04:12:29 -- common/autotest_common.sh@10 -- # set +x
00:21:14.765 04:12:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:15.023 04:12:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:15.023 04:12:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:15.023 04:12:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:15.023 04:12:29 -- common/autotest_common.sh@10 -- # set +x
00:21:15.023 04:12:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:15.023 04:12:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:21:15.023 04:12:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:21:15.023 04:12:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:21:15.023 04:12:29 -- host/auth.sh@44 -- # digest=sha256
00:21:15.023 04:12:29 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:21:15.023 04:12:29 -- host/auth.sh@44 -- # keyid=1
00:21:15.023 04:12:29 -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==:
00:21:15.023 04:12:29 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:21:15.023 04:12:29 -- host/auth.sh@48 -- # echo ffdhe4096
00:21:15.023 04:12:29 -- host/auth.sh@49 -- # echo DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==:
00:21:15.023 04:12:29 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1
00:21:15.023 04:12:29 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:21:15.023 04:12:29 -- host/auth.sh@68 -- # digest=sha256
00:21:15.023 04:12:29 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:21:15.023 04:12:29 -- host/auth.sh@68 -- # keyid=1
00:21:15.023 04:12:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:21:15.023 04:12:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:15.023 04:12:29 -- common/autotest_common.sh@10 -- # set +x
00:21:15.023 04:12:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:15.023 04:12:29 -- host/auth.sh@70 -- # get_main_ns_ip
00:21:15.023 04:12:29 -- nvmf/common.sh@717 -- # local ip
00:21:15.023 04:12:29 -- nvmf/common.sh@718 -- # ip_candidates=()
00:21:15.023 04:12:29 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:21:15.023 04:12:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:15.023 04:12:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:15.023 04:12:29 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:21:15.023 04:12:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:21:15.023 04:12:29 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:21:15.023 04:12:29 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:21:15.023 04:12:29 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:21:15.023 04:12:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:21:15.023 04:12:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:15.023 04:12:29 -- common/autotest_common.sh@10 -- # set +x
00:21:15.281 nvme0n1
00:21:15.281 04:12:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:15.282 04:12:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:21:15.282 04:12:29 -- host/auth.sh@73 -- # jq -r '.[].name'
00:21:15.282 04:12:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:15.282 04:12:29 -- common/autotest_common.sh@10 -- # set +x
00:21:15.282 04:12:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:15.282 04:12:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:15.282 04:12:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:15.282 04:12:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:15.282 04:12:29 -- common/autotest_common.sh@10 -- # set +x
00:21:15.282 04:12:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:15.282 04:12:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:21:15.282 04:12:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:21:15.282 04:12:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:21:15.282 04:12:29 -- host/auth.sh@44 -- # digest=sha256
00:21:15.282 04:12:29 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:21:15.282 04:12:29 -- host/auth.sh@44 -- # keyid=2
00:21:15.282 04:12:29 -- host/auth.sh@45 -- # key=DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8:
00:21:15.282 04:12:29 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:21:15.282 04:12:29 -- host/auth.sh@48 -- # echo ffdhe4096
00:21:15.282 04:12:29 -- host/auth.sh@49 -- # echo DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8:
00:21:15.282 04:12:29 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2
00:21:15.282 04:12:29 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:21:15.282 04:12:29 -- host/auth.sh@68 -- # digest=sha256
00:21:15.282 04:12:29 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:21:15.282 04:12:29 -- host/auth.sh@68 -- # keyid=2
00:21:15.282 04:12:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:21:15.282 04:12:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:15.282 04:12:29 -- common/autotest_common.sh@10 -- # set +x
00:21:15.282 04:12:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:15.282 04:12:29 -- host/auth.sh@70 -- # get_main_ns_ip
00:21:15.282 04:12:29 -- nvmf/common.sh@717 -- # local ip
00:21:15.282 04:12:29 -- nvmf/common.sh@718 -- # ip_candidates=()
00:21:15.282 04:12:29 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:21:15.282 04:12:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:15.282 04:12:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:15.282 04:12:29 -- nvmf/common.sh@723 -- # [[ -z rdma ]]
00:21:15.282 04:12:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:21:15.282 04:12:29 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP
00:21:15.282 04:12:29 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]]
00:21:15.282 04:12:29 -- nvmf/common.sh@731 -- # echo 192.168.100.8
00:21:15.282 04:12:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:21:15.282 04:12:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:15.282 04:12:29 -- common/autotest_common.sh@10 -- # set +x
00:21:15.541 nvme0n1
00:21:15.541 04:12:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:15.541 04:12:29 -- host/auth.sh@73 -- # jq -r '.[].name'
00:21:15.541 04:12:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:21:15.541 04:12:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:15.541 04:12:29 -- common/autotest_common.sh@10 -- # set +x
00:21:15.541
04:12:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.541 04:12:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.541 04:12:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.542 04:12:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.542 04:12:29 -- common/autotest_common.sh@10 -- # set +x 00:21:15.542 04:12:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.542 04:12:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:15.542 04:12:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:21:15.542 04:12:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:15.542 04:12:29 -- host/auth.sh@44 -- # digest=sha256 00:21:15.542 04:12:29 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:15.542 04:12:29 -- host/auth.sh@44 -- # keyid=3 00:21:15.542 04:12:29 -- host/auth.sh@45 -- # key=DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==: 00:21:15.542 04:12:29 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:21:15.542 04:12:29 -- host/auth.sh@48 -- # echo ffdhe4096 00:21:15.542 04:12:29 -- host/auth.sh@49 -- # echo DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==: 00:21:15.542 04:12:29 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:21:15.542 04:12:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:15.542 04:12:29 -- host/auth.sh@68 -- # digest=sha256 00:21:15.542 04:12:29 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:21:15.542 04:12:29 -- host/auth.sh@68 -- # keyid=3 00:21:15.542 04:12:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:15.542 04:12:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.542 04:12:29 -- common/autotest_common.sh@10 -- # set +x 00:21:15.542 04:12:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.542 04:12:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:15.542 
04:12:29 -- nvmf/common.sh@717 -- # local ip 00:21:15.542 04:12:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:15.542 04:12:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:15.542 04:12:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.542 04:12:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.542 04:12:29 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:15.542 04:12:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:15.542 04:12:29 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:15.542 04:12:29 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:15.542 04:12:29 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:15.542 04:12:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:21:15.542 04:12:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.542 04:12:29 -- common/autotest_common.sh@10 -- # set +x 00:21:15.800 nvme0n1 00:21:15.800 04:12:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.800 04:12:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.800 04:12:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:15.801 04:12:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.801 04:12:30 -- common/autotest_common.sh@10 -- # set +x 00:21:15.801 04:12:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.801 04:12:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.801 04:12:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.801 04:12:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.801 04:12:30 -- common/autotest_common.sh@10 -- # set +x 00:21:15.801 04:12:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.801 04:12:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:15.801 04:12:30 -- 
host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:21:15.801 04:12:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:15.801 04:12:30 -- host/auth.sh@44 -- # digest=sha256 00:21:15.801 04:12:30 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:15.801 04:12:30 -- host/auth.sh@44 -- # keyid=4 00:21:15.801 04:12:30 -- host/auth.sh@45 -- # key=DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=: 00:21:15.801 04:12:30 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:21:15.801 04:12:30 -- host/auth.sh@48 -- # echo ffdhe4096 00:21:15.801 04:12:30 -- host/auth.sh@49 -- # echo DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=: 00:21:15.801 04:12:30 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:21:15.801 04:12:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:15.801 04:12:30 -- host/auth.sh@68 -- # digest=sha256 00:21:15.801 04:12:30 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:21:15.801 04:12:30 -- host/auth.sh@68 -- # keyid=4 00:21:15.801 04:12:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:15.801 04:12:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.801 04:12:30 -- common/autotest_common.sh@10 -- # set +x 00:21:15.801 04:12:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.801 04:12:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:15.801 04:12:30 -- nvmf/common.sh@717 -- # local ip 00:21:15.801 04:12:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:15.801 04:12:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:15.801 04:12:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.801 04:12:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.801 04:12:30 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:15.801 04:12:30 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:21:15.801 04:12:30 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:15.801 04:12:30 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:15.801 04:12:30 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:15.801 04:12:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:15.801 04:12:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.801 04:12:30 -- common/autotest_common.sh@10 -- # set +x 00:21:16.059 nvme0n1 00:21:16.059 04:12:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:16.059 04:12:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:16.059 04:12:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:16.059 04:12:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:16.059 04:12:30 -- common/autotest_common.sh@10 -- # set +x 00:21:16.059 04:12:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:16.317 04:12:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.317 04:12:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:16.317 04:12:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:16.317 04:12:30 -- common/autotest_common.sh@10 -- # set +x 00:21:16.317 04:12:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:16.317 04:12:30 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:21:16.317 04:12:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:16.317 04:12:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:21:16.317 04:12:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:16.317 04:12:30 -- host/auth.sh@44 -- # digest=sha256 00:21:16.317 04:12:30 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:16.317 04:12:30 -- host/auth.sh@44 -- # keyid=0 00:21:16.317 04:12:30 -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd: 00:21:16.317 04:12:30 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:21:16.317 04:12:30 -- host/auth.sh@48 -- # echo ffdhe6144 00:21:17.691 04:12:31 -- host/auth.sh@49 -- # echo DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd: 00:21:17.691 04:12:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:21:17.691 04:12:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:17.691 04:12:31 -- host/auth.sh@68 -- # digest=sha256 00:21:17.691 04:12:31 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:21:17.691 04:12:31 -- host/auth.sh@68 -- # keyid=0 00:21:17.691 04:12:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:17.691 04:12:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:17.691 04:12:31 -- common/autotest_common.sh@10 -- # set +x 00:21:17.691 04:12:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:17.691 04:12:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:17.691 04:12:31 -- nvmf/common.sh@717 -- # local ip 00:21:17.691 04:12:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:17.691 04:12:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:17.691 04:12:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:17.691 04:12:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:17.691 04:12:31 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:17.691 04:12:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:17.691 04:12:31 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:17.691 04:12:31 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:17.691 04:12:31 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:17.691 04:12:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:21:17.691 
04:12:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:17.691 04:12:31 -- common/autotest_common.sh@10 -- # set +x 00:21:17.691 nvme0n1 00:21:17.691 04:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:17.691 04:12:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:17.691 04:12:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:17.691 04:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:17.691 04:12:32 -- common/autotest_common.sh@10 -- # set +x 00:21:17.691 04:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:17.691 04:12:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.691 04:12:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:17.691 04:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:17.691 04:12:32 -- common/autotest_common.sh@10 -- # set +x 00:21:17.950 04:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:17.950 04:12:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:17.950 04:12:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:21:17.950 04:12:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:17.950 04:12:32 -- host/auth.sh@44 -- # digest=sha256 00:21:17.950 04:12:32 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:17.950 04:12:32 -- host/auth.sh@44 -- # keyid=1 00:21:17.950 04:12:32 -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:17.950 04:12:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:21:17.950 04:12:32 -- host/auth.sh@48 -- # echo ffdhe6144 00:21:17.950 04:12:32 -- host/auth.sh@49 -- # echo DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:17.950 04:12:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:21:17.950 04:12:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:17.950 04:12:32 -- host/auth.sh@68 -- # digest=sha256 00:21:17.950 
04:12:32 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:21:17.950 04:12:32 -- host/auth.sh@68 -- # keyid=1 00:21:17.950 04:12:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:17.950 04:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:17.950 04:12:32 -- common/autotest_common.sh@10 -- # set +x 00:21:17.950 04:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:17.950 04:12:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:17.950 04:12:32 -- nvmf/common.sh@717 -- # local ip 00:21:17.950 04:12:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:17.950 04:12:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:17.950 04:12:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:17.950 04:12:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:17.950 04:12:32 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:17.950 04:12:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:17.950 04:12:32 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:17.950 04:12:32 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:17.950 04:12:32 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:17.950 04:12:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:21:17.950 04:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:17.950 04:12:32 -- common/autotest_common.sh@10 -- # set +x 00:21:18.208 nvme0n1 00:21:18.208 04:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.208 04:12:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:18.208 04:12:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:18.208 04:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.208 04:12:32 -- common/autotest_common.sh@10 -- # set +x 00:21:18.208 04:12:32 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.208 04:12:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.208 04:12:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.208 04:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.208 04:12:32 -- common/autotest_common.sh@10 -- # set +x 00:21:18.208 04:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.208 04:12:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:18.208 04:12:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:21:18.208 04:12:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:18.208 04:12:32 -- host/auth.sh@44 -- # digest=sha256 00:21:18.208 04:12:32 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:18.208 04:12:32 -- host/auth.sh@44 -- # keyid=2 00:21:18.208 04:12:32 -- host/auth.sh@45 -- # key=DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8: 00:21:18.208 04:12:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:21:18.208 04:12:32 -- host/auth.sh@48 -- # echo ffdhe6144 00:21:18.208 04:12:32 -- host/auth.sh@49 -- # echo DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8: 00:21:18.208 04:12:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:21:18.208 04:12:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:18.208 04:12:32 -- host/auth.sh@68 -- # digest=sha256 00:21:18.208 04:12:32 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:21:18.208 04:12:32 -- host/auth.sh@68 -- # keyid=2 00:21:18.208 04:12:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:18.208 04:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.208 04:12:32 -- common/autotest_common.sh@10 -- # set +x 00:21:18.208 04:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.208 04:12:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:18.208 04:12:32 -- nvmf/common.sh@717 -- # local ip 00:21:18.208 
04:12:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:18.208 04:12:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:18.208 04:12:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:18.208 04:12:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:18.208 04:12:32 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:18.208 04:12:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:18.208 04:12:32 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:18.208 04:12:32 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:18.208 04:12:32 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:18.208 04:12:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:18.208 04:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.208 04:12:32 -- common/autotest_common.sh@10 -- # set +x 00:21:18.774 nvme0n1 00:21:18.774 04:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.774 04:12:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:18.774 04:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.774 04:12:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:18.774 04:12:33 -- common/autotest_common.sh@10 -- # set +x 00:21:18.774 04:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.774 04:12:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.774 04:12:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.774 04:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.774 04:12:33 -- common/autotest_common.sh@10 -- # set +x 00:21:18.774 04:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.774 04:12:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:18.774 04:12:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 
00:21:18.774 04:12:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:18.774 04:12:33 -- host/auth.sh@44 -- # digest=sha256 00:21:18.774 04:12:33 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:18.774 04:12:33 -- host/auth.sh@44 -- # keyid=3 00:21:18.774 04:12:33 -- host/auth.sh@45 -- # key=DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==: 00:21:18.774 04:12:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:21:18.774 04:12:33 -- host/auth.sh@48 -- # echo ffdhe6144 00:21:18.774 04:12:33 -- host/auth.sh@49 -- # echo DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==: 00:21:18.774 04:12:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:21:18.774 04:12:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:18.774 04:12:33 -- host/auth.sh@68 -- # digest=sha256 00:21:18.774 04:12:33 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:21:18.774 04:12:33 -- host/auth.sh@68 -- # keyid=3 00:21:18.774 04:12:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:18.774 04:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.774 04:12:33 -- common/autotest_common.sh@10 -- # set +x 00:21:18.774 04:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.774 04:12:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:18.774 04:12:33 -- nvmf/common.sh@717 -- # local ip 00:21:18.774 04:12:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:18.774 04:12:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:18.774 04:12:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:18.774 04:12:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:18.774 04:12:33 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:18.774 04:12:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:18.775 04:12:33 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:18.775 
04:12:33 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:18.775 04:12:33 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:18.775 04:12:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:21:18.775 04:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.775 04:12:33 -- common/autotest_common.sh@10 -- # set +x 00:21:19.032 nvme0n1 00:21:19.032 04:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.032 04:12:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:19.032 04:12:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:19.032 04:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.032 04:12:33 -- common/autotest_common.sh@10 -- # set +x 00:21:19.032 04:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.032 04:12:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.032 04:12:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:19.032 04:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.032 04:12:33 -- common/autotest_common.sh@10 -- # set +x 00:21:19.033 04:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.033 04:12:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:19.033 04:12:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:21:19.033 04:12:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:19.033 04:12:33 -- host/auth.sh@44 -- # digest=sha256 00:21:19.033 04:12:33 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:19.033 04:12:33 -- host/auth.sh@44 -- # keyid=4 00:21:19.033 04:12:33 -- host/auth.sh@45 -- # key=DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=: 00:21:19.033 04:12:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:21:19.033 04:12:33 -- host/auth.sh@48 -- # echo ffdhe6144 
00:21:19.033 04:12:33 -- host/auth.sh@49 -- # echo DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=: 00:21:19.033 04:12:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:21:19.033 04:12:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:19.033 04:12:33 -- host/auth.sh@68 -- # digest=sha256 00:21:19.033 04:12:33 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:21:19.033 04:12:33 -- host/auth.sh@68 -- # keyid=4 00:21:19.033 04:12:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:19.033 04:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.033 04:12:33 -- common/autotest_common.sh@10 -- # set +x 00:21:19.033 04:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.033 04:12:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:19.033 04:12:33 -- nvmf/common.sh@717 -- # local ip 00:21:19.033 04:12:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:19.033 04:12:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:19.033 04:12:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:19.033 04:12:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:19.033 04:12:33 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:19.033 04:12:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:19.033 04:12:33 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:19.033 04:12:33 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:19.033 04:12:33 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:19.033 04:12:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:19.033 04:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.033 04:12:33 -- common/autotest_common.sh@10 -- # set +x 00:21:19.599 nvme0n1 
00:21:19.599 04:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.599 04:12:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:19.599 04:12:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:19.599 04:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.599 04:12:33 -- common/autotest_common.sh@10 -- # set +x 00:21:19.599 04:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.599 04:12:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.599 04:12:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:19.599 04:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.599 04:12:33 -- common/autotest_common.sh@10 -- # set +x 00:21:19.599 04:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.599 04:12:33 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.599 04:12:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:19.599 04:12:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:21:19.599 04:12:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:19.599 04:12:33 -- host/auth.sh@44 -- # digest=sha256 00:21:19.599 04:12:33 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:19.599 04:12:33 -- host/auth.sh@44 -- # keyid=0 00:21:19.599 04:12:33 -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd: 00:21:19.599 04:12:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:21:19.599 04:12:33 -- host/auth.sh@48 -- # echo ffdhe8192 00:21:22.128 04:12:36 -- host/auth.sh@49 -- # echo DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd: 00:21:22.128 04:12:36 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:21:22.128 04:12:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:22.128 04:12:36 -- host/auth.sh@68 -- # digest=sha256 00:21:22.128 04:12:36 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:21:22.128 04:12:36 -- host/auth.sh@68 -- # keyid=0 00:21:22.128 
04:12:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:22.128 04:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.128 04:12:36 -- common/autotest_common.sh@10 -- # set +x 00:21:22.128 04:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.128 04:12:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:22.128 04:12:36 -- nvmf/common.sh@717 -- # local ip 00:21:22.128 04:12:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:22.128 04:12:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:22.128 04:12:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:22.128 04:12:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:22.128 04:12:36 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:22.128 04:12:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:22.128 04:12:36 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:22.128 04:12:36 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:22.128 04:12:36 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:22.128 04:12:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:21:22.128 04:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.128 04:12:36 -- common/autotest_common.sh@10 -- # set +x 00:21:22.695 nvme0n1 00:21:22.695 04:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.695 04:12:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:22.695 04:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.695 04:12:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:22.695 04:12:37 -- common/autotest_common.sh@10 -- # set +x 00:21:22.695 04:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.695 04:12:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:21:22.695 04:12:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:22.695 04:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.695 04:12:37 -- common/autotest_common.sh@10 -- # set +x 00:21:22.695 04:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.695 04:12:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:22.695 04:12:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:21:22.695 04:12:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:22.695 04:12:37 -- host/auth.sh@44 -- # digest=sha256 00:21:22.695 04:12:37 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:22.695 04:12:37 -- host/auth.sh@44 -- # keyid=1 00:21:22.695 04:12:37 -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:22.695 04:12:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:21:22.695 04:12:37 -- host/auth.sh@48 -- # echo ffdhe8192 00:21:22.695 04:12:37 -- host/auth.sh@49 -- # echo DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:22.695 04:12:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:21:22.695 04:12:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:22.695 04:12:37 -- host/auth.sh@68 -- # digest=sha256 00:21:22.695 04:12:37 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:21:22.695 04:12:37 -- host/auth.sh@68 -- # keyid=1 00:21:22.695 04:12:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:22.695 04:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.695 04:12:37 -- common/autotest_common.sh@10 -- # set +x 00:21:22.695 04:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.695 04:12:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:22.695 04:12:37 -- nvmf/common.sh@717 -- # local ip 00:21:22.695 04:12:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:22.695 04:12:37 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:22.695 04:12:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:22.695 04:12:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:22.695 04:12:37 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:22.695 04:12:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:22.695 04:12:37 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:22.695 04:12:37 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:22.695 04:12:37 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:22.695 04:12:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:21:22.695 04:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.695 04:12:37 -- common/autotest_common.sh@10 -- # set +x 00:21:23.262 nvme0n1 00:21:23.262 04:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.262 04:12:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:23.262 04:12:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:23.262 04:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.262 04:12:37 -- common/autotest_common.sh@10 -- # set +x 00:21:23.262 04:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.262 04:12:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.262 04:12:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:23.262 04:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.262 04:12:37 -- common/autotest_common.sh@10 -- # set +x 00:21:23.262 04:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.262 04:12:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:23.262 04:12:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:21:23.262 04:12:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 
00:21:23.262 04:12:37 -- host/auth.sh@44 -- # digest=sha256 00:21:23.262 04:12:37 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:23.262 04:12:37 -- host/auth.sh@44 -- # keyid=2 00:21:23.262 04:12:37 -- host/auth.sh@45 -- # key=DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8: 00:21:23.262 04:12:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:21:23.262 04:12:37 -- host/auth.sh@48 -- # echo ffdhe8192 00:21:23.262 04:12:37 -- host/auth.sh@49 -- # echo DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8: 00:21:23.262 04:12:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:21:23.262 04:12:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:23.262 04:12:37 -- host/auth.sh@68 -- # digest=sha256 00:21:23.262 04:12:37 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:21:23.262 04:12:37 -- host/auth.sh@68 -- # keyid=2 00:21:23.262 04:12:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:23.262 04:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.262 04:12:37 -- common/autotest_common.sh@10 -- # set +x 00:21:23.262 04:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.262 04:12:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:23.262 04:12:37 -- nvmf/common.sh@717 -- # local ip 00:21:23.262 04:12:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:23.262 04:12:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:23.262 04:12:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:23.262 04:12:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:23.262 04:12:37 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:23.262 04:12:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:23.262 04:12:37 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:23.262 04:12:37 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:23.262 04:12:37 -- nvmf/common.sh@731 -- # echo 192.168.100.8 
00:21:23.262 04:12:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:23.262 04:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.262 04:12:37 -- common/autotest_common.sh@10 -- # set +x 00:21:23.829 nvme0n1 00:21:23.829 04:12:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.829 04:12:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:23.829 04:12:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:23.829 04:12:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.829 04:12:38 -- common/autotest_common.sh@10 -- # set +x 00:21:23.829 04:12:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.829 04:12:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.829 04:12:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:23.829 04:12:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.829 04:12:38 -- common/autotest_common.sh@10 -- # set +x 00:21:23.829 04:12:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.829 04:12:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:23.829 04:12:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:21:23.829 04:12:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:23.829 04:12:38 -- host/auth.sh@44 -- # digest=sha256 00:21:23.829 04:12:38 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:23.829 04:12:38 -- host/auth.sh@44 -- # keyid=3 00:21:23.829 04:12:38 -- host/auth.sh@45 -- # key=DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==: 00:21:23.829 04:12:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:21:23.829 04:12:38 -- host/auth.sh@48 -- # echo ffdhe8192 00:21:23.829 04:12:38 -- host/auth.sh@49 -- # echo DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==: 00:21:23.829 
04:12:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:21:23.829 04:12:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:23.829 04:12:38 -- host/auth.sh@68 -- # digest=sha256 00:21:23.829 04:12:38 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:21:23.829 04:12:38 -- host/auth.sh@68 -- # keyid=3 00:21:23.829 04:12:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:23.829 04:12:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.829 04:12:38 -- common/autotest_common.sh@10 -- # set +x 00:21:23.829 04:12:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.829 04:12:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:23.829 04:12:38 -- nvmf/common.sh@717 -- # local ip 00:21:23.829 04:12:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:23.829 04:12:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:23.829 04:12:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:23.829 04:12:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:23.829 04:12:38 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:23.829 04:12:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:23.829 04:12:38 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:23.829 04:12:38 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:23.829 04:12:38 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:23.829 04:12:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:21:23.829 04:12:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.829 04:12:38 -- common/autotest_common.sh@10 -- # set +x 00:21:24.396 nvme0n1 00:21:24.396 04:12:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.396 04:12:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:24.396 
04:12:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.396 04:12:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:24.396 04:12:38 -- common/autotest_common.sh@10 -- # set +x 00:21:24.396 04:12:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.396 04:12:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.396 04:12:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:24.396 04:12:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.396 04:12:38 -- common/autotest_common.sh@10 -- # set +x 00:21:24.654 04:12:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.654 04:12:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:24.654 04:12:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:21:24.654 04:12:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:24.654 04:12:38 -- host/auth.sh@44 -- # digest=sha256 00:21:24.654 04:12:38 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:24.654 04:12:38 -- host/auth.sh@44 -- # keyid=4 00:21:24.654 04:12:38 -- host/auth.sh@45 -- # key=DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=: 00:21:24.654 04:12:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:21:24.654 04:12:38 -- host/auth.sh@48 -- # echo ffdhe8192 00:21:24.654 04:12:38 -- host/auth.sh@49 -- # echo DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=: 00:21:24.654 04:12:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:21:24.654 04:12:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:24.654 04:12:38 -- host/auth.sh@68 -- # digest=sha256 00:21:24.654 04:12:38 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:21:24.654 04:12:38 -- host/auth.sh@68 -- # keyid=4 00:21:24.654 04:12:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:24.654 04:12:38 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.654 04:12:38 -- common/autotest_common.sh@10 -- # set +x 00:21:24.654 04:12:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.654 04:12:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:24.654 04:12:38 -- nvmf/common.sh@717 -- # local ip 00:21:24.654 04:12:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:24.654 04:12:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:24.654 04:12:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:24.654 04:12:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:24.654 04:12:38 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:24.654 04:12:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:24.654 04:12:38 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:24.654 04:12:38 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:24.654 04:12:38 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:24.654 04:12:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:24.654 04:12:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.654 04:12:38 -- common/autotest_common.sh@10 -- # set +x 00:21:25.221 nvme0n1 00:21:25.221 04:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.221 04:12:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:25.221 04:12:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:25.221 04:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.221 04:12:39 -- common/autotest_common.sh@10 -- # set +x 00:21:25.221 04:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.221 04:12:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.221 04:12:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:25.221 04:12:39 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:21:25.221 04:12:39 -- common/autotest_common.sh@10 -- # set +x 00:21:25.221 04:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.221 04:12:39 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:21:25.221 04:12:39 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:21:25.221 04:12:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:25.221 04:12:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:21:25.221 04:12:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:25.221 04:12:39 -- host/auth.sh@44 -- # digest=sha384 00:21:25.221 04:12:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:25.221 04:12:39 -- host/auth.sh@44 -- # keyid=0 00:21:25.221 04:12:39 -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd: 00:21:25.221 04:12:39 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:21:25.221 04:12:39 -- host/auth.sh@48 -- # echo ffdhe2048 00:21:25.221 04:12:39 -- host/auth.sh@49 -- # echo DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd: 00:21:25.221 04:12:39 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:21:25.221 04:12:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:25.221 04:12:39 -- host/auth.sh@68 -- # digest=sha384 00:21:25.221 04:12:39 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:21:25.221 04:12:39 -- host/auth.sh@68 -- # keyid=0 00:21:25.221 04:12:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:25.221 04:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.221 04:12:39 -- common/autotest_common.sh@10 -- # set +x 00:21:25.221 04:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.221 04:12:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:25.221 04:12:39 -- nvmf/common.sh@717 -- # local ip 00:21:25.221 04:12:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:25.221 04:12:39 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:21:25.221 04:12:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:25.221 04:12:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:25.221 04:12:39 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:25.221 04:12:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:25.221 04:12:39 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:25.221 04:12:39 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:25.221 04:12:39 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:25.221 04:12:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:21:25.221 04:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.221 04:12:39 -- common/autotest_common.sh@10 -- # set +x 00:21:25.221 nvme0n1 00:21:25.221 04:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.221 04:12:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:25.221 04:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.221 04:12:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:25.221 04:12:39 -- common/autotest_common.sh@10 -- # set +x 00:21:25.221 04:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.480 04:12:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.480 04:12:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:25.480 04:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.480 04:12:39 -- common/autotest_common.sh@10 -- # set +x 00:21:25.480 04:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.480 04:12:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:25.480 04:12:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:21:25.480 04:12:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:25.480 04:12:39 -- 
host/auth.sh@44 -- # digest=sha384 00:21:25.480 04:12:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:25.480 04:12:39 -- host/auth.sh@44 -- # keyid=1 00:21:25.480 04:12:39 -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:25.480 04:12:39 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:21:25.480 04:12:39 -- host/auth.sh@48 -- # echo ffdhe2048 00:21:25.480 04:12:39 -- host/auth.sh@49 -- # echo DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:25.480 04:12:39 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:21:25.480 04:12:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:25.480 04:12:39 -- host/auth.sh@68 -- # digest=sha384 00:21:25.480 04:12:39 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:21:25.480 04:12:39 -- host/auth.sh@68 -- # keyid=1 00:21:25.480 04:12:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:25.480 04:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.480 04:12:39 -- common/autotest_common.sh@10 -- # set +x 00:21:25.480 04:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.480 04:12:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:25.480 04:12:39 -- nvmf/common.sh@717 -- # local ip 00:21:25.480 04:12:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:25.480 04:12:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:25.480 04:12:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:25.480 04:12:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:25.480 04:12:39 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:25.480 04:12:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:25.480 04:12:39 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:25.480 04:12:39 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:25.480 04:12:39 -- nvmf/common.sh@731 -- 
# echo 192.168.100.8 00:21:25.480 04:12:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:21:25.480 04:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.480 04:12:39 -- common/autotest_common.sh@10 -- # set +x 00:21:25.480 nvme0n1 00:21:25.480 04:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.480 04:12:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:25.480 04:12:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:25.480 04:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.480 04:12:39 -- common/autotest_common.sh@10 -- # set +x 00:21:25.480 04:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.739 04:12:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.739 04:12:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:25.739 04:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.739 04:12:40 -- common/autotest_common.sh@10 -- # set +x 00:21:25.739 04:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.739 04:12:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:25.739 04:12:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:21:25.739 04:12:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:25.739 04:12:40 -- host/auth.sh@44 -- # digest=sha384 00:21:25.739 04:12:40 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:25.739 04:12:40 -- host/auth.sh@44 -- # keyid=2 00:21:25.739 04:12:40 -- host/auth.sh@45 -- # key=DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8: 00:21:25.739 04:12:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:21:25.739 04:12:40 -- host/auth.sh@48 -- # echo ffdhe2048 00:21:25.739 04:12:40 -- host/auth.sh@49 -- # echo DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8: 00:21:25.739 04:12:40 -- host/auth.sh@111 
-- # connect_authenticate sha384 ffdhe2048 2 00:21:25.739 04:12:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:25.739 04:12:40 -- host/auth.sh@68 -- # digest=sha384 00:21:25.739 04:12:40 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:21:25.739 04:12:40 -- host/auth.sh@68 -- # keyid=2 00:21:25.739 04:12:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:25.739 04:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.739 04:12:40 -- common/autotest_common.sh@10 -- # set +x 00:21:25.739 04:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.739 04:12:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:25.739 04:12:40 -- nvmf/common.sh@717 -- # local ip 00:21:25.739 04:12:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:25.739 04:12:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:25.739 04:12:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:25.739 04:12:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:25.739 04:12:40 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:25.739 04:12:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:25.739 04:12:40 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:25.739 04:12:40 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:25.739 04:12:40 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:25.739 04:12:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:25.739 04:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.739 04:12:40 -- common/autotest_common.sh@10 -- # set +x 00:21:25.739 nvme0n1 00:21:25.739 04:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.739 04:12:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:25.740 04:12:40 -- host/auth.sh@73 -- # 
jq -r '.[].name' 00:21:25.740 04:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.740 04:12:40 -- common/autotest_common.sh@10 -- # set +x 00:21:25.740 04:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.998 04:12:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.998 04:12:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:25.998 04:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.998 04:12:40 -- common/autotest_common.sh@10 -- # set +x 00:21:25.998 04:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.998 04:12:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:25.998 04:12:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:21:25.998 04:12:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:25.998 04:12:40 -- host/auth.sh@44 -- # digest=sha384 00:21:25.998 04:12:40 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:25.998 04:12:40 -- host/auth.sh@44 -- # keyid=3 00:21:25.998 04:12:40 -- host/auth.sh@45 -- # key=DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==: 00:21:25.998 04:12:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:21:25.998 04:12:40 -- host/auth.sh@48 -- # echo ffdhe2048 00:21:25.998 04:12:40 -- host/auth.sh@49 -- # echo DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==: 00:21:25.998 04:12:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:21:25.998 04:12:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:25.998 04:12:40 -- host/auth.sh@68 -- # digest=sha384 00:21:25.998 04:12:40 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:21:25.998 04:12:40 -- host/auth.sh@68 -- # keyid=3 00:21:25.998 04:12:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:25.998 04:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.998 04:12:40 -- 
common/autotest_common.sh@10 -- # set +x 00:21:25.998 04:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.998 04:12:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:25.998 04:12:40 -- nvmf/common.sh@717 -- # local ip 00:21:25.998 04:12:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:25.998 04:12:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:25.998 04:12:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:25.998 04:12:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:25.998 04:12:40 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:25.998 04:12:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:25.998 04:12:40 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:25.998 04:12:40 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:25.998 04:12:40 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:25.998 04:12:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:21:25.998 04:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.998 04:12:40 -- common/autotest_common.sh@10 -- # set +x 00:21:25.998 nvme0n1 00:21:25.998 04:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.998 04:12:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:25.998 04:12:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:25.998 04:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.998 04:12:40 -- common/autotest_common.sh@10 -- # set +x 00:21:25.998 04:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:26.257 04:12:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.257 04:12:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:26.257 04:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:26.257 04:12:40 -- common/autotest_common.sh@10 -- 
# set +x 00:21:26.257 04:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:26.257 04:12:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:26.257 04:12:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:21:26.257 04:12:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:26.257 04:12:40 -- host/auth.sh@44 -- # digest=sha384 00:21:26.257 04:12:40 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:26.257 04:12:40 -- host/auth.sh@44 -- # keyid=4 00:21:26.257 04:12:40 -- host/auth.sh@45 -- # key=DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=: 00:21:26.257 04:12:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:21:26.257 04:12:40 -- host/auth.sh@48 -- # echo ffdhe2048 00:21:26.257 04:12:40 -- host/auth.sh@49 -- # echo DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=: 00:21:26.257 04:12:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:21:26.257 04:12:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:26.257 04:12:40 -- host/auth.sh@68 -- # digest=sha384 00:21:26.257 04:12:40 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:21:26.257 04:12:40 -- host/auth.sh@68 -- # keyid=4 00:21:26.257 04:12:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:26.257 04:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:26.257 04:12:40 -- common/autotest_common.sh@10 -- # set +x 00:21:26.257 04:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:26.257 04:12:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:26.257 04:12:40 -- nvmf/common.sh@717 -- # local ip 00:21:26.257 04:12:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:26.257 04:12:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:26.257 04:12:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:26.257 04:12:40 -- 
nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:26.257 04:12:40 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:26.257 04:12:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:26.257 04:12:40 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:26.257 04:12:40 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:26.257 04:12:40 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:26.257 04:12:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:26.257 04:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:26.257 04:12:40 -- common/autotest_common.sh@10 -- # set +x 00:21:26.257 nvme0n1 00:21:26.257 04:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:26.257 04:12:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:26.257 04:12:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:26.257 04:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:26.257 04:12:40 -- common/autotest_common.sh@10 -- # set +x 00:21:26.257 04:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:26.515 04:12:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.515 04:12:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:26.515 04:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:26.515 04:12:40 -- common/autotest_common.sh@10 -- # set +x 00:21:26.515 04:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:26.515 04:12:40 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:21:26.515 04:12:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:26.515 04:12:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:21:26.515 04:12:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:26.515 04:12:40 -- host/auth.sh@44 -- # digest=sha384 00:21:26.515 04:12:40 -- 
host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:26.515 04:12:40 -- host/auth.sh@44 -- # keyid=0 00:21:26.515 04:12:40 -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd: 00:21:26.516 04:12:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:21:26.516 04:12:40 -- host/auth.sh@48 -- # echo ffdhe3072 00:21:26.516 04:12:40 -- host/auth.sh@49 -- # echo DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd: 00:21:26.516 04:12:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:21:26.516 04:12:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:26.516 04:12:40 -- host/auth.sh@68 -- # digest=sha384 00:21:26.516 04:12:40 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:21:26.516 04:12:40 -- host/auth.sh@68 -- # keyid=0 00:21:26.516 04:12:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:26.516 04:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:26.516 04:12:40 -- common/autotest_common.sh@10 -- # set +x 00:21:26.516 04:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:26.516 04:12:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:26.516 04:12:40 -- nvmf/common.sh@717 -- # local ip 00:21:26.516 04:12:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:26.516 04:12:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:26.516 04:12:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:26.516 04:12:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:26.516 04:12:40 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:26.516 04:12:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:26.516 04:12:40 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:26.516 04:12:40 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:26.516 04:12:40 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:26.516 04:12:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:21:26.516 04:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:26.516 04:12:40 -- common/autotest_common.sh@10 -- # set +x 00:21:26.516 nvme0n1 00:21:26.516 04:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:26.774 04:12:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:26.774 04:12:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:26.774 04:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:26.774 04:12:41 -- common/autotest_common.sh@10 -- # set +x 00:21:26.774 04:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:26.774 04:12:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.774 04:12:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:26.774 04:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:26.774 04:12:41 -- common/autotest_common.sh@10 -- # set +x 00:21:26.774 04:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:26.774 04:12:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:26.774 04:12:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:21:26.774 04:12:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:26.774 04:12:41 -- host/auth.sh@44 -- # digest=sha384 00:21:26.774 04:12:41 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:26.774 04:12:41 -- host/auth.sh@44 -- # keyid=1 00:21:26.774 04:12:41 -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:26.774 04:12:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:21:26.774 04:12:41 -- host/auth.sh@48 -- # echo ffdhe3072 00:21:26.774 04:12:41 -- host/auth.sh@49 -- # echo DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:26.774 04:12:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:21:26.774 
04:12:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:26.774 04:12:41 -- host/auth.sh@68 -- # digest=sha384 00:21:26.774 04:12:41 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:21:26.774 04:12:41 -- host/auth.sh@68 -- # keyid=1 00:21:26.774 04:12:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:26.774 04:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:26.774 04:12:41 -- common/autotest_common.sh@10 -- # set +x 00:21:26.774 04:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:26.774 04:12:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:26.774 04:12:41 -- nvmf/common.sh@717 -- # local ip 00:21:26.774 04:12:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:26.774 04:12:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:26.774 04:12:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:26.774 04:12:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:26.774 04:12:41 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:26.774 04:12:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:26.774 04:12:41 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:26.774 04:12:41 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:26.774 04:12:41 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:26.774 04:12:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:21:26.774 04:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:26.774 04:12:41 -- common/autotest_common.sh@10 -- # set +x 00:21:27.032 nvme0n1 00:21:27.032 04:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.032 04:12:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:27.032 04:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.032 04:12:41 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:21:27.032 04:12:41 -- common/autotest_common.sh@10 -- # set +x 00:21:27.032 04:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.032 04:12:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.032 04:12:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:27.032 04:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.032 04:12:41 -- common/autotest_common.sh@10 -- # set +x 00:21:27.032 04:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.032 04:12:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:27.032 04:12:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:21:27.032 04:12:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:27.032 04:12:41 -- host/auth.sh@44 -- # digest=sha384 00:21:27.032 04:12:41 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:27.032 04:12:41 -- host/auth.sh@44 -- # keyid=2 00:21:27.032 04:12:41 -- host/auth.sh@45 -- # key=DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8: 00:21:27.032 04:12:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:21:27.032 04:12:41 -- host/auth.sh@48 -- # echo ffdhe3072 00:21:27.032 04:12:41 -- host/auth.sh@49 -- # echo DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8: 00:21:27.032 04:12:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:21:27.032 04:12:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:27.032 04:12:41 -- host/auth.sh@68 -- # digest=sha384 00:21:27.032 04:12:41 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:21:27.032 04:12:41 -- host/auth.sh@68 -- # keyid=2 00:21:27.032 04:12:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:27.032 04:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.032 04:12:41 -- common/autotest_common.sh@10 -- # set +x 00:21:27.032 04:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:21:27.032 04:12:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:27.032 04:12:41 -- nvmf/common.sh@717 -- # local ip 00:21:27.032 04:12:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:27.032 04:12:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:27.032 04:12:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:27.032 04:12:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:27.032 04:12:41 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:27.032 04:12:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:27.032 04:12:41 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:27.032 04:12:41 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:27.032 04:12:41 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:27.032 04:12:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:27.032 04:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.032 04:12:41 -- common/autotest_common.sh@10 -- # set +x 00:21:27.289 nvme0n1 00:21:27.289 04:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.289 04:12:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:27.289 04:12:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:27.289 04:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.289 04:12:41 -- common/autotest_common.sh@10 -- # set +x 00:21:27.289 04:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.289 04:12:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.289 04:12:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:27.289 04:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.289 04:12:41 -- common/autotest_common.sh@10 -- # set +x 00:21:27.289 04:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.289 04:12:41 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:27.289 04:12:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:21:27.289 04:12:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:27.289 04:12:41 -- host/auth.sh@44 -- # digest=sha384 00:21:27.289 04:12:41 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:27.289 04:12:41 -- host/auth.sh@44 -- # keyid=3 00:21:27.289 04:12:41 -- host/auth.sh@45 -- # key=DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==: 00:21:27.289 04:12:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:21:27.289 04:12:41 -- host/auth.sh@48 -- # echo ffdhe3072 00:21:27.289 04:12:41 -- host/auth.sh@49 -- # echo DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==: 00:21:27.289 04:12:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:21:27.289 04:12:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:27.289 04:12:41 -- host/auth.sh@68 -- # digest=sha384 00:21:27.289 04:12:41 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:21:27.289 04:12:41 -- host/auth.sh@68 -- # keyid=3 00:21:27.289 04:12:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:27.289 04:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.289 04:12:41 -- common/autotest_common.sh@10 -- # set +x 00:21:27.289 04:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.289 04:12:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:27.289 04:12:41 -- nvmf/common.sh@717 -- # local ip 00:21:27.289 04:12:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:27.289 04:12:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:27.289 04:12:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:27.289 04:12:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:27.289 04:12:41 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:27.289 04:12:41 -- 
nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:27.289 04:12:41 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:27.289 04:12:41 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:27.289 04:12:41 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:27.289 04:12:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:21:27.289 04:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.289 04:12:41 -- common/autotest_common.sh@10 -- # set +x 00:21:27.547 nvme0n1 00:21:27.547 04:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.547 04:12:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:27.547 04:12:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:27.547 04:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.547 04:12:41 -- common/autotest_common.sh@10 -- # set +x 00:21:27.547 04:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.547 04:12:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.547 04:12:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:27.547 04:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.547 04:12:41 -- common/autotest_common.sh@10 -- # set +x 00:21:27.547 04:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.547 04:12:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:27.547 04:12:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:21:27.547 04:12:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:27.547 04:12:41 -- host/auth.sh@44 -- # digest=sha384 00:21:27.547 04:12:41 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:27.547 04:12:41 -- host/auth.sh@44 -- # keyid=4 00:21:27.547 04:12:41 -- host/auth.sh@45 -- # 
key=DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=: 00:21:27.547 04:12:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:21:27.547 04:12:41 -- host/auth.sh@48 -- # echo ffdhe3072 00:21:27.547 04:12:41 -- host/auth.sh@49 -- # echo DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=: 00:21:27.547 04:12:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:21:27.547 04:12:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:27.547 04:12:41 -- host/auth.sh@68 -- # digest=sha384 00:21:27.547 04:12:41 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:21:27.547 04:12:41 -- host/auth.sh@68 -- # keyid=4 00:21:27.547 04:12:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:27.547 04:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.547 04:12:41 -- common/autotest_common.sh@10 -- # set +x 00:21:27.547 04:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.547 04:12:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:27.547 04:12:41 -- nvmf/common.sh@717 -- # local ip 00:21:27.547 04:12:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:27.547 04:12:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:27.547 04:12:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:27.547 04:12:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:27.547 04:12:41 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:27.547 04:12:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:27.547 04:12:41 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:27.547 04:12:41 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:27.547 04:12:41 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:27.547 04:12:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:27.547 04:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.547 04:12:41 -- common/autotest_common.sh@10 -- # set +x 00:21:27.805 nvme0n1 00:21:27.805 04:12:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.805 04:12:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:27.805 04:12:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:27.805 04:12:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.805 04:12:42 -- common/autotest_common.sh@10 -- # set +x 00:21:27.805 04:12:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.805 04:12:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.805 04:12:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:27.805 04:12:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.805 04:12:42 -- common/autotest_common.sh@10 -- # set +x 00:21:27.805 04:12:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.805 04:12:42 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:21:27.805 04:12:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:27.805 04:12:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:21:27.805 04:12:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:27.805 04:12:42 -- host/auth.sh@44 -- # digest=sha384 00:21:27.805 04:12:42 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:27.805 04:12:42 -- host/auth.sh@44 -- # keyid=0 00:21:27.805 04:12:42 -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd: 00:21:27.805 04:12:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:21:27.805 04:12:42 -- host/auth.sh@48 -- # echo ffdhe4096 00:21:27.805 04:12:42 -- host/auth.sh@49 -- # echo DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd: 00:21:27.805 04:12:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:21:27.805 04:12:42 -- 
host/auth.sh@66 -- # local digest dhgroup keyid 00:21:27.805 04:12:42 -- host/auth.sh@68 -- # digest=sha384 00:21:27.806 04:12:42 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:21:27.806 04:12:42 -- host/auth.sh@68 -- # keyid=0 00:21:27.806 04:12:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:27.806 04:12:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.806 04:12:42 -- common/autotest_common.sh@10 -- # set +x 00:21:27.806 04:12:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.806 04:12:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:27.806 04:12:42 -- nvmf/common.sh@717 -- # local ip 00:21:27.806 04:12:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:27.806 04:12:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:27.806 04:12:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:27.806 04:12:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:27.806 04:12:42 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:27.806 04:12:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:27.806 04:12:42 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:27.806 04:12:42 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:27.806 04:12:42 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:27.806 04:12:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:21:27.806 04:12:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.806 04:12:42 -- common/autotest_common.sh@10 -- # set +x 00:21:28.064 nvme0n1 00:21:28.064 04:12:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:28.064 04:12:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:28.064 04:12:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:28.064 04:12:42 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.064 04:12:42 -- common/autotest_common.sh@10 -- # set +x 00:21:28.064 04:12:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:28.064 04:12:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.064 04:12:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:28.064 04:12:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.064 04:12:42 -- common/autotest_common.sh@10 -- # set +x 00:21:28.064 04:12:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:28.064 04:12:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:28.064 04:12:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:21:28.064 04:12:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:28.064 04:12:42 -- host/auth.sh@44 -- # digest=sha384 00:21:28.064 04:12:42 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:28.064 04:12:42 -- host/auth.sh@44 -- # keyid=1 00:21:28.064 04:12:42 -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:28.064 04:12:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:21:28.064 04:12:42 -- host/auth.sh@48 -- # echo ffdhe4096 00:21:28.064 04:12:42 -- host/auth.sh@49 -- # echo DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:28.064 04:12:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:21:28.064 04:12:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:28.064 04:12:42 -- host/auth.sh@68 -- # digest=sha384 00:21:28.064 04:12:42 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:21:28.064 04:12:42 -- host/auth.sh@68 -- # keyid=1 00:21:28.064 04:12:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:28.064 04:12:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.064 04:12:42 -- common/autotest_common.sh@10 -- # set +x 00:21:28.064 
04:12:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:28.064 04:12:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:28.064 04:12:42 -- nvmf/common.sh@717 -- # local ip 00:21:28.064 04:12:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:28.064 04:12:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:28.064 04:12:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:28.064 04:12:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:28.064 04:12:42 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:28.064 04:12:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:28.064 04:12:42 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:28.064 04:12:42 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:28.064 04:12:42 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:28.064 04:12:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:21:28.064 04:12:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.064 04:12:42 -- common/autotest_common.sh@10 -- # set +x 00:21:28.323 nvme0n1 00:21:28.323 04:12:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:28.323 04:12:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:28.323 04:12:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:28.323 04:12:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.323 04:12:42 -- common/autotest_common.sh@10 -- # set +x 00:21:28.323 04:12:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:28.323 04:12:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.323 04:12:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:28.323 04:12:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.323 04:12:42 -- common/autotest_common.sh@10 -- # set +x 00:21:28.323 04:12:42 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:28.323 04:12:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:28.323 04:12:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:21:28.323 04:12:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:28.323 04:12:42 -- host/auth.sh@44 -- # digest=sha384 00:21:28.323 04:12:42 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:28.323 04:12:42 -- host/auth.sh@44 -- # keyid=2 00:21:28.323 04:12:42 -- host/auth.sh@45 -- # key=DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8: 00:21:28.323 04:12:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:21:28.323 04:12:42 -- host/auth.sh@48 -- # echo ffdhe4096 00:21:28.323 04:12:42 -- host/auth.sh@49 -- # echo DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8: 00:21:28.323 04:12:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:21:28.323 04:12:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:28.323 04:12:42 -- host/auth.sh@68 -- # digest=sha384 00:21:28.323 04:12:42 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:21:28.323 04:12:42 -- host/auth.sh@68 -- # keyid=2 00:21:28.323 04:12:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:28.323 04:12:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.323 04:12:42 -- common/autotest_common.sh@10 -- # set +x 00:21:28.582 04:12:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:28.582 04:12:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:28.582 04:12:42 -- nvmf/common.sh@717 -- # local ip 00:21:28.582 04:12:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:28.582 04:12:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:28.582 04:12:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:28.582 04:12:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:28.582 04:12:42 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 
00:21:28.582 04:12:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:28.582 04:12:42 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:28.582 04:12:42 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:28.582 04:12:42 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:28.582 04:12:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:28.582 04:12:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.582 04:12:42 -- common/autotest_common.sh@10 -- # set +x 00:21:28.582 nvme0n1 00:21:28.582 04:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:28.582 04:12:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:28.582 04:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.582 04:12:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:28.582 04:12:43 -- common/autotest_common.sh@10 -- # set +x 00:21:28.839 04:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:28.839 04:12:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.839 04:12:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:28.839 04:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.839 04:12:43 -- common/autotest_common.sh@10 -- # set +x 00:21:28.839 04:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:28.839 04:12:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:28.839 04:12:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:21:28.839 04:12:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:28.839 04:12:43 -- host/auth.sh@44 -- # digest=sha384 00:21:28.839 04:12:43 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:28.839 04:12:43 -- host/auth.sh@44 -- # keyid=3 00:21:28.839 04:12:43 -- host/auth.sh@45 -- # 
key=DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==: 00:21:28.839 04:12:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:21:28.839 04:12:43 -- host/auth.sh@48 -- # echo ffdhe4096 00:21:28.839 04:12:43 -- host/auth.sh@49 -- # echo DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==: 00:21:28.839 04:12:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:21:28.839 04:12:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:28.839 04:12:43 -- host/auth.sh@68 -- # digest=sha384 00:21:28.839 04:12:43 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:21:28.839 04:12:43 -- host/auth.sh@68 -- # keyid=3 00:21:28.839 04:12:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:28.839 04:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.839 04:12:43 -- common/autotest_common.sh@10 -- # set +x 00:21:28.839 04:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:28.839 04:12:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:28.839 04:12:43 -- nvmf/common.sh@717 -- # local ip 00:21:28.839 04:12:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:28.839 04:12:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:28.839 04:12:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:28.839 04:12:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:28.839 04:12:43 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:28.839 04:12:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:28.839 04:12:43 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:28.839 04:12:43 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:28.839 04:12:43 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:28.839 04:12:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:21:28.839 04:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.839 04:12:43 -- common/autotest_common.sh@10 -- # set +x 00:21:29.097 nvme0n1 00:21:29.097 04:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.097 04:12:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:29.097 04:12:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:29.097 04:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.097 04:12:43 -- common/autotest_common.sh@10 -- # set +x 00:21:29.097 04:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.097 04:12:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.097 04:12:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:29.097 04:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.097 04:12:43 -- common/autotest_common.sh@10 -- # set +x 00:21:29.097 04:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.097 04:12:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:29.097 04:12:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:21:29.097 04:12:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:29.097 04:12:43 -- host/auth.sh@44 -- # digest=sha384 00:21:29.097 04:12:43 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:29.097 04:12:43 -- host/auth.sh@44 -- # keyid=4 00:21:29.097 04:12:43 -- host/auth.sh@45 -- # key=DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=: 00:21:29.097 04:12:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:21:29.097 04:12:43 -- host/auth.sh@48 -- # echo ffdhe4096 00:21:29.097 04:12:43 -- host/auth.sh@49 -- # echo DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=: 00:21:29.097 04:12:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:21:29.097 04:12:43 -- host/auth.sh@66 -- # local 
digest dhgroup keyid 00:21:29.097 04:12:43 -- host/auth.sh@68 -- # digest=sha384 00:21:29.097 04:12:43 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:21:29.097 04:12:43 -- host/auth.sh@68 -- # keyid=4 00:21:29.097 04:12:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:29.097 04:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.097 04:12:43 -- common/autotest_common.sh@10 -- # set +x 00:21:29.097 04:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.097 04:12:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:29.097 04:12:43 -- nvmf/common.sh@717 -- # local ip 00:21:29.097 04:12:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:29.097 04:12:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:29.097 04:12:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:29.097 04:12:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:29.097 04:12:43 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:29.097 04:12:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:29.097 04:12:43 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:29.097 04:12:43 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:29.097 04:12:43 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:29.097 04:12:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:29.097 04:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.097 04:12:43 -- common/autotest_common.sh@10 -- # set +x 00:21:29.356 nvme0n1 00:21:29.356 04:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.356 04:12:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:29.356 04:12:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:29.356 04:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:21:29.356 04:12:43 -- common/autotest_common.sh@10 -- # set +x 00:21:29.356 04:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.356 04:12:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.356 04:12:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:29.356 04:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.356 04:12:43 -- common/autotest_common.sh@10 -- # set +x 00:21:29.356 04:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.356 04:12:43 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:21:29.356 04:12:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:29.356 04:12:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:21:29.356 04:12:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:29.356 04:12:43 -- host/auth.sh@44 -- # digest=sha384 00:21:29.356 04:12:43 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:29.356 04:12:43 -- host/auth.sh@44 -- # keyid=0 00:21:29.356 04:12:43 -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd: 00:21:29.356 04:12:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:21:29.356 04:12:43 -- host/auth.sh@48 -- # echo ffdhe6144 00:21:29.356 04:12:43 -- host/auth.sh@49 -- # echo DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd: 00:21:29.356 04:12:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:21:29.356 04:12:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:29.356 04:12:43 -- host/auth.sh@68 -- # digest=sha384 00:21:29.356 04:12:43 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:21:29.356 04:12:43 -- host/auth.sh@68 -- # keyid=0 00:21:29.356 04:12:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:29.356 04:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.356 04:12:43 -- common/autotest_common.sh@10 -- # set +x 00:21:29.356 04:12:43 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.356 04:12:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:29.356 04:12:43 -- nvmf/common.sh@717 -- # local ip 00:21:29.356 04:12:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:29.356 04:12:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:29.356 04:12:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:29.356 04:12:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:29.356 04:12:43 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:29.356 04:12:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:29.356 04:12:43 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:29.356 04:12:43 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:29.356 04:12:43 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:29.356 04:12:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:21:29.356 04:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.356 04:12:43 -- common/autotest_common.sh@10 -- # set +x 00:21:29.921 nvme0n1 00:21:29.921 04:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.921 04:12:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:29.921 04:12:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:29.921 04:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.921 04:12:44 -- common/autotest_common.sh@10 -- # set +x 00:21:29.921 04:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.921 04:12:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.921 04:12:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:29.921 04:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.921 04:12:44 -- common/autotest_common.sh@10 -- # set +x 00:21:29.921 04:12:44 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:21:29.921 04:12:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:29.921 04:12:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:21:29.921 04:12:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:29.921 04:12:44 -- host/auth.sh@44 -- # digest=sha384 00:21:29.921 04:12:44 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:29.921 04:12:44 -- host/auth.sh@44 -- # keyid=1 00:21:29.921 04:12:44 -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:29.921 04:12:44 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:21:29.921 04:12:44 -- host/auth.sh@48 -- # echo ffdhe6144 00:21:29.921 04:12:44 -- host/auth.sh@49 -- # echo DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:29.921 04:12:44 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:21:29.921 04:12:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:29.921 04:12:44 -- host/auth.sh@68 -- # digest=sha384 00:21:29.921 04:12:44 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:21:29.921 04:12:44 -- host/auth.sh@68 -- # keyid=1 00:21:29.921 04:12:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:29.921 04:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.921 04:12:44 -- common/autotest_common.sh@10 -- # set +x 00:21:29.921 04:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.921 04:12:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:29.921 04:12:44 -- nvmf/common.sh@717 -- # local ip 00:21:29.921 04:12:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:29.921 04:12:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:29.921 04:12:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:29.921 04:12:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:29.921 04:12:44 -- nvmf/common.sh@723 -- # [[ -z 
rdma ]] 00:21:29.921 04:12:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:29.921 04:12:44 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:29.921 04:12:44 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:29.921 04:12:44 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:29.921 04:12:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:21:29.921 04:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.921 04:12:44 -- common/autotest_common.sh@10 -- # set +x 00:21:30.178 nvme0n1 00:21:30.178 04:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.178 04:12:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:30.178 04:12:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:30.178 04:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.178 04:12:44 -- common/autotest_common.sh@10 -- # set +x 00:21:30.178 04:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.178 04:12:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.178 04:12:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:30.178 04:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.178 04:12:44 -- common/autotest_common.sh@10 -- # set +x 00:21:30.178 04:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.178 04:12:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:30.178 04:12:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:21:30.178 04:12:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:30.178 04:12:44 -- host/auth.sh@44 -- # digest=sha384 00:21:30.178 04:12:44 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:30.178 04:12:44 -- host/auth.sh@44 -- # keyid=2 00:21:30.178 04:12:44 -- host/auth.sh@45 -- # key=DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8: 
00:21:30.178 04:12:44 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:21:30.178 04:12:44 -- host/auth.sh@48 -- # echo ffdhe6144 00:21:30.179 04:12:44 -- host/auth.sh@49 -- # echo DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8: 00:21:30.179 04:12:44 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:21:30.179 04:12:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:30.179 04:12:44 -- host/auth.sh@68 -- # digest=sha384 00:21:30.179 04:12:44 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:21:30.179 04:12:44 -- host/auth.sh@68 -- # keyid=2 00:21:30.179 04:12:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:30.179 04:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.179 04:12:44 -- common/autotest_common.sh@10 -- # set +x 00:21:30.179 04:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.179 04:12:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:30.179 04:12:44 -- nvmf/common.sh@717 -- # local ip 00:21:30.179 04:12:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:30.179 04:12:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:30.179 04:12:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:30.179 04:12:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:30.179 04:12:44 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:30.179 04:12:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:30.179 04:12:44 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:30.179 04:12:44 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:30.179 04:12:44 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:30.179 04:12:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:30.179 04:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:21:30.179 04:12:44 -- common/autotest_common.sh@10 -- # set +x 00:21:30.746 nvme0n1 00:21:30.746 04:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.746 04:12:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:30.746 04:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.746 04:12:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:30.746 04:12:45 -- common/autotest_common.sh@10 -- # set +x 00:21:30.746 04:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.746 04:12:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.746 04:12:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:30.746 04:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.746 04:12:45 -- common/autotest_common.sh@10 -- # set +x 00:21:30.746 04:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.746 04:12:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:30.746 04:12:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:21:30.746 04:12:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:30.746 04:12:45 -- host/auth.sh@44 -- # digest=sha384 00:21:30.746 04:12:45 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:30.746 04:12:45 -- host/auth.sh@44 -- # keyid=3 00:21:30.746 04:12:45 -- host/auth.sh@45 -- # key=DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==: 00:21:30.746 04:12:45 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:21:30.746 04:12:45 -- host/auth.sh@48 -- # echo ffdhe6144 00:21:30.746 04:12:45 -- host/auth.sh@49 -- # echo DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==: 00:21:30.746 04:12:45 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:21:30.746 04:12:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:30.746 04:12:45 -- host/auth.sh@68 -- # digest=sha384 00:21:30.746 04:12:45 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:21:30.746 
04:12:45 -- host/auth.sh@68 -- # keyid=3 00:21:30.746 04:12:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:30.746 04:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.746 04:12:45 -- common/autotest_common.sh@10 -- # set +x 00:21:30.746 04:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.746 04:12:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:30.746 04:12:45 -- nvmf/common.sh@717 -- # local ip 00:21:30.746 04:12:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:30.746 04:12:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:30.746 04:12:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:30.746 04:12:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:30.746 04:12:45 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:30.746 04:12:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:30.746 04:12:45 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:30.746 04:12:45 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:30.746 04:12:45 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:30.746 04:12:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:21:30.746 04:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.746 04:12:45 -- common/autotest_common.sh@10 -- # set +x 00:21:31.004 nvme0n1 00:21:31.004 04:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.004 04:12:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:31.004 04:12:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:31.004 04:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.004 04:12:45 -- common/autotest_common.sh@10 -- # set +x 00:21:31.004 04:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.004 04:12:45 
-- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.004 04:12:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:31.004 04:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.004 04:12:45 -- common/autotest_common.sh@10 -- # set +x 00:21:31.004 04:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.263 04:12:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:31.263 04:12:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:21:31.263 04:12:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:31.263 04:12:45 -- host/auth.sh@44 -- # digest=sha384 00:21:31.263 04:12:45 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:31.263 04:12:45 -- host/auth.sh@44 -- # keyid=4 00:21:31.263 04:12:45 -- host/auth.sh@45 -- # key=DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=: 00:21:31.263 04:12:45 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:21:31.263 04:12:45 -- host/auth.sh@48 -- # echo ffdhe6144 00:21:31.263 04:12:45 -- host/auth.sh@49 -- # echo DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=: 00:21:31.263 04:12:45 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:21:31.263 04:12:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:31.263 04:12:45 -- host/auth.sh@68 -- # digest=sha384 00:21:31.263 04:12:45 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:21:31.263 04:12:45 -- host/auth.sh@68 -- # keyid=4 00:21:31.263 04:12:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:31.263 04:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.263 04:12:45 -- common/autotest_common.sh@10 -- # set +x 00:21:31.263 04:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.263 04:12:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:31.263 04:12:45 -- nvmf/common.sh@717 -- # local 
ip 00:21:31.263 04:12:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:31.263 04:12:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:31.263 04:12:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:31.263 04:12:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:31.263 04:12:45 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:31.263 04:12:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:31.263 04:12:45 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:31.263 04:12:45 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:31.263 04:12:45 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:31.263 04:12:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:31.263 04:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.263 04:12:45 -- common/autotest_common.sh@10 -- # set +x 00:21:31.521 nvme0n1 00:21:31.521 04:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.521 04:12:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:31.521 04:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.521 04:12:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:31.521 04:12:45 -- common/autotest_common.sh@10 -- # set +x 00:21:31.521 04:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.521 04:12:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.521 04:12:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:31.521 04:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.521 04:12:45 -- common/autotest_common.sh@10 -- # set +x 00:21:31.521 04:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.521 04:12:45 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:21:31.521 04:12:45 -- host/auth.sh@109 -- # for keyid in 
"${!keys[@]}" 00:21:31.521 04:12:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:21:31.521 04:12:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:31.521 04:12:45 -- host/auth.sh@44 -- # digest=sha384 00:21:31.521 04:12:45 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:31.521 04:12:45 -- host/auth.sh@44 -- # keyid=0 00:21:31.521 04:12:45 -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd: 00:21:31.521 04:12:45 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:21:31.521 04:12:45 -- host/auth.sh@48 -- # echo ffdhe8192 00:21:31.521 04:12:45 -- host/auth.sh@49 -- # echo DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd: 00:21:31.521 04:12:45 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:21:31.521 04:12:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:31.521 04:12:45 -- host/auth.sh@68 -- # digest=sha384 00:21:31.521 04:12:45 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:21:31.521 04:12:45 -- host/auth.sh@68 -- # keyid=0 00:21:31.521 04:12:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:31.521 04:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.521 04:12:45 -- common/autotest_common.sh@10 -- # set +x 00:21:31.521 04:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:31.521 04:12:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:31.521 04:12:45 -- nvmf/common.sh@717 -- # local ip 00:21:31.521 04:12:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:31.522 04:12:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:31.522 04:12:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:31.522 04:12:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:31.522 04:12:45 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:31.522 04:12:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:31.522 04:12:45 -- 
nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:31.522 04:12:45 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:31.522 04:12:45 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:31.522 04:12:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:21:31.522 04:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:31.522 04:12:45 -- common/autotest_common.sh@10 -- # set +x 00:21:32.088 nvme0n1 00:21:32.088 04:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.088 04:12:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:32.088 04:12:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:32.088 04:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.088 04:12:46 -- common/autotest_common.sh@10 -- # set +x 00:21:32.088 04:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.088 04:12:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.088 04:12:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:32.088 04:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.088 04:12:46 -- common/autotest_common.sh@10 -- # set +x 00:21:32.088 04:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.088 04:12:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:32.088 04:12:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:21:32.088 04:12:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:32.088 04:12:46 -- host/auth.sh@44 -- # digest=sha384 00:21:32.088 04:12:46 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:32.088 04:12:46 -- host/auth.sh@44 -- # keyid=1 00:21:32.088 04:12:46 -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:32.088 04:12:46 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:21:32.088 04:12:46 
-- host/auth.sh@48 -- # echo ffdhe8192 00:21:32.088 04:12:46 -- host/auth.sh@49 -- # echo DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:32.088 04:12:46 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:21:32.088 04:12:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:32.088 04:12:46 -- host/auth.sh@68 -- # digest=sha384 00:21:32.088 04:12:46 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:21:32.088 04:12:46 -- host/auth.sh@68 -- # keyid=1 00:21:32.088 04:12:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:32.088 04:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.088 04:12:46 -- common/autotest_common.sh@10 -- # set +x 00:21:32.088 04:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.088 04:12:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:32.088 04:12:46 -- nvmf/common.sh@717 -- # local ip 00:21:32.088 04:12:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:32.088 04:12:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:32.088 04:12:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:32.088 04:12:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:32.088 04:12:46 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:32.088 04:12:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:32.088 04:12:46 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:32.088 04:12:46 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:32.088 04:12:46 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:32.088 04:12:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:21:32.088 04:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.088 04:12:46 -- common/autotest_common.sh@10 -- # set +x 
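The `DHHC-1:…` strings passed via `--dhchap-key` in the traces above follow the NVMe DH-HMAC-CHAP secret representation: `DHHC-1:<hash-id>:<base64>:`, where the base64 payload is the raw secret followed by a 4-byte CRC-32 trailer (this layout is an assumption from the NVMe base specification, not stated in the log itself). A minimal sketch decoding one of the key0 secrets seen here:

```shell
#!/usr/bin/env bash
# key0 value copied verbatim from the log above.
key="DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd:"

# Field 2 is the hash-id ("00" = secret is not hashed), field 3 the payload.
hash_id=$(printf '%s' "$key" | cut -d: -f2)
b64=$(printf '%s' "$key" | cut -d: -f3)

# Decode and subtract the assumed 4-byte CRC-32 trailer to get the secret size.
secret_len=$(( $(printf '%s' "$b64" | base64 -d | wc -c) - 4 ))
echo "$hash_id $secret_len"
```

Run against this key it reports a 32-byte secret, which matches the shortest secret size the test sweeps; the `key3` values in the log decode to longer secrets under the same assumed layout.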
00:21:32.654 nvme0n1 00:21:32.654 04:12:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.654 04:12:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:32.654 04:12:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.654 04:12:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:32.654 04:12:47 -- common/autotest_common.sh@10 -- # set +x 00:21:32.654 04:12:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.654 04:12:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.654 04:12:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:32.654 04:12:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.654 04:12:47 -- common/autotest_common.sh@10 -- # set +x 00:21:32.913 04:12:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.913 04:12:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:32.913 04:12:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:21:32.913 04:12:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:32.913 04:12:47 -- host/auth.sh@44 -- # digest=sha384 00:21:32.913 04:12:47 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:32.913 04:12:47 -- host/auth.sh@44 -- # keyid=2 00:21:32.913 04:12:47 -- host/auth.sh@45 -- # key=DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8: 00:21:32.913 04:12:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:21:32.913 04:12:47 -- host/auth.sh@48 -- # echo ffdhe8192 00:21:32.913 04:12:47 -- host/auth.sh@49 -- # echo DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8: 00:21:32.913 04:12:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:21:32.913 04:12:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:32.913 04:12:47 -- host/auth.sh@68 -- # digest=sha384 00:21:32.913 04:12:47 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:21:32.913 04:12:47 -- host/auth.sh@68 -- # keyid=2 00:21:32.913 04:12:47 -- host/auth.sh@69 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:32.913 04:12:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.913 04:12:47 -- common/autotest_common.sh@10 -- # set +x 00:21:32.913 04:12:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.913 04:12:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:32.913 04:12:47 -- nvmf/common.sh@717 -- # local ip 00:21:32.913 04:12:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:32.913 04:12:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:32.913 04:12:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:32.913 04:12:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:32.913 04:12:47 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:32.913 04:12:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:32.913 04:12:47 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:32.913 04:12:47 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:32.913 04:12:47 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:32.913 04:12:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:32.913 04:12:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.913 04:12:47 -- common/autotest_common.sh@10 -- # set +x 00:21:33.482 nvme0n1 00:21:33.482 04:12:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.482 04:12:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:33.482 04:12:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.482 04:12:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:33.482 04:12:47 -- common/autotest_common.sh@10 -- # set +x 00:21:33.482 04:12:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.482 04:12:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.482 04:12:47 -- host/auth.sh@74 -- # 
rpc_cmd bdev_nvme_detach_controller nvme0 00:21:33.482 04:12:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.482 04:12:47 -- common/autotest_common.sh@10 -- # set +x 00:21:33.482 04:12:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.482 04:12:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:33.482 04:12:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:21:33.482 04:12:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:33.482 04:12:47 -- host/auth.sh@44 -- # digest=sha384 00:21:33.482 04:12:47 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:33.482 04:12:47 -- host/auth.sh@44 -- # keyid=3 00:21:33.482 04:12:47 -- host/auth.sh@45 -- # key=DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==: 00:21:33.482 04:12:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:21:33.482 04:12:47 -- host/auth.sh@48 -- # echo ffdhe8192 00:21:33.482 04:12:47 -- host/auth.sh@49 -- # echo DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==: 00:21:33.482 04:12:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:21:33.482 04:12:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:33.482 04:12:47 -- host/auth.sh@68 -- # digest=sha384 00:21:33.482 04:12:47 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:21:33.482 04:12:47 -- host/auth.sh@68 -- # keyid=3 00:21:33.482 04:12:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:33.482 04:12:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.482 04:12:47 -- common/autotest_common.sh@10 -- # set +x 00:21:33.482 04:12:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.482 04:12:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:33.482 04:12:47 -- nvmf/common.sh@717 -- # local ip 00:21:33.482 04:12:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:33.482 04:12:47 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:21:33.482 04:12:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:33.482 04:12:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:33.482 04:12:47 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:33.482 04:12:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:33.482 04:12:47 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:33.482 04:12:47 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:33.482 04:12:47 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:33.482 04:12:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:21:33.482 04:12:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.482 04:12:47 -- common/autotest_common.sh@10 -- # set +x 00:21:34.048 nvme0n1 00:21:34.048 04:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.048 04:12:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:34.048 04:12:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:34.048 04:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.048 04:12:48 -- common/autotest_common.sh@10 -- # set +x 00:21:34.048 04:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.048 04:12:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.048 04:12:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:34.048 04:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.048 04:12:48 -- common/autotest_common.sh@10 -- # set +x 00:21:34.048 04:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.048 04:12:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:34.048 04:12:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:21:34.048 04:12:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:34.048 04:12:48 -- 
host/auth.sh@44 -- # digest=sha384 00:21:34.048 04:12:48 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:34.048 04:12:48 -- host/auth.sh@44 -- # keyid=4 00:21:34.048 04:12:48 -- host/auth.sh@45 -- # key=DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=: 00:21:34.048 04:12:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:21:34.048 04:12:48 -- host/auth.sh@48 -- # echo ffdhe8192 00:21:34.048 04:12:48 -- host/auth.sh@49 -- # echo DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=: 00:21:34.048 04:12:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:21:34.048 04:12:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:34.048 04:12:48 -- host/auth.sh@68 -- # digest=sha384 00:21:34.048 04:12:48 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:21:34.048 04:12:48 -- host/auth.sh@68 -- # keyid=4 00:21:34.048 04:12:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:34.048 04:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.048 04:12:48 -- common/autotest_common.sh@10 -- # set +x 00:21:34.048 04:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.048 04:12:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:34.048 04:12:48 -- nvmf/common.sh@717 -- # local ip 00:21:34.048 04:12:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:34.049 04:12:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:34.049 04:12:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:34.049 04:12:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:34.049 04:12:48 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:34.049 04:12:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:34.049 04:12:48 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:34.049 04:12:48 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 
00:21:34.049 04:12:48 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:34.049 04:12:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:34.049 04:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.049 04:12:48 -- common/autotest_common.sh@10 -- # set +x 00:21:34.613 nvme0n1 00:21:34.613 04:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.613 04:12:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:34.613 04:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.613 04:12:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:34.613 04:12:48 -- common/autotest_common.sh@10 -- # set +x 00:21:34.613 04:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.613 04:12:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.613 04:12:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:34.613 04:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.613 04:12:48 -- common/autotest_common.sh@10 -- # set +x 00:21:34.613 04:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.613 04:12:49 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:21:34.613 04:12:49 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:21:34.613 04:12:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:34.613 04:12:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:21:34.613 04:12:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:34.613 04:12:49 -- host/auth.sh@44 -- # digest=sha512 00:21:34.613 04:12:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:34.613 04:12:49 -- host/auth.sh@44 -- # keyid=0 00:21:34.613 04:12:49 -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd: 00:21:34.613 04:12:49 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 
00:21:34.613 04:12:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:21:34.613 04:12:49 -- host/auth.sh@49 -- # echo DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd: 00:21:34.613 04:12:49 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:21:34.613 04:12:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:34.613 04:12:49 -- host/auth.sh@68 -- # digest=sha512 00:21:34.613 04:12:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:21:34.613 04:12:49 -- host/auth.sh@68 -- # keyid=0 00:21:34.613 04:12:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:34.613 04:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.613 04:12:49 -- common/autotest_common.sh@10 -- # set +x 00:21:34.613 04:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.613 04:12:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:34.613 04:12:49 -- nvmf/common.sh@717 -- # local ip 00:21:34.613 04:12:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:34.613 04:12:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:34.613 04:12:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:34.613 04:12:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:34.613 04:12:49 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:34.613 04:12:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:34.613 04:12:49 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:34.613 04:12:49 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:34.613 04:12:49 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:34.613 04:12:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:21:34.613 04:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.613 04:12:49 -- common/autotest_common.sh@10 -- # set +x 
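The `get_main_ns_ip` fragment repeated throughout these traces picks which environment variable holds the target address by indexing an associative array with the transport type, then dereferencing that variable name indirectly. A standalone sketch of the pattern (the `rdma` value 192.168.100.8 is the one echoed in this log; the `tcp` address below is a made-up placeholder):

```shell
#!/usr/bin/env bash
TEST_TRANSPORT=rdma                  # this run uses the rdma transport
NVMF_FIRST_TARGET_IP=192.168.100.8   # address echoed throughout the log
NVMF_INITIATOR_IP=10.0.0.1           # placeholder; not shown in this log

# Map transport -> NAME of the variable holding the address to use.
declare -A ip_candidates
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
ip_candidates["tcp"]=NVMF_INITIATOR_IP

varname=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_FIRST_TARGET_IP
ip=${!varname}                              # bash indirect expansion
echo "$ip"
```

Storing variable *names* in the array (rather than values) lets the helper resolve the address at call time, after the environment has been populated by the transport setup.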
00:21:34.870 nvme0n1 00:21:34.870 04:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.870 04:12:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:34.870 04:12:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:34.870 04:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.870 04:12:49 -- common/autotest_common.sh@10 -- # set +x 00:21:34.870 04:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.870 04:12:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.870 04:12:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:34.870 04:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.870 04:12:49 -- common/autotest_common.sh@10 -- # set +x 00:21:34.870 04:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.870 04:12:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:34.870 04:12:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:21:34.870 04:12:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:34.870 04:12:49 -- host/auth.sh@44 -- # digest=sha512 00:21:34.870 04:12:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:34.870 04:12:49 -- host/auth.sh@44 -- # keyid=1 00:21:34.870 04:12:49 -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:34.870 04:12:49 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:21:34.870 04:12:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:21:34.870 04:12:49 -- host/auth.sh@49 -- # echo DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:34.870 04:12:49 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:21:34.870 04:12:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:34.870 04:12:49 -- host/auth.sh@68 -- # digest=sha512 00:21:34.870 04:12:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:21:34.870 04:12:49 -- host/auth.sh@68 -- # keyid=1 00:21:34.870 04:12:49 -- 
host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:34.870 04:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.870 04:12:49 -- common/autotest_common.sh@10 -- # set +x 00:21:34.870 04:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.870 04:12:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:34.871 04:12:49 -- nvmf/common.sh@717 -- # local ip 00:21:34.871 04:12:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:34.871 04:12:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:34.871 04:12:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:34.871 04:12:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:34.871 04:12:49 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:34.871 04:12:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:34.871 04:12:49 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:34.871 04:12:49 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:34.871 04:12:49 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:34.871 04:12:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:21:34.871 04:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.871 04:12:49 -- common/autotest_common.sh@10 -- # set +x 00:21:35.129 nvme0n1 00:21:35.129 04:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.129 04:12:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:35.129 04:12:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:35.129 04:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.129 04:12:49 -- common/autotest_common.sh@10 -- # set +x 00:21:35.129 04:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.129 04:12:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.129 
04:12:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:35.129 04:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.129 04:12:49 -- common/autotest_common.sh@10 -- # set +x 00:21:35.129 04:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.129 04:12:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:35.129 04:12:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:21:35.129 04:12:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:35.129 04:12:49 -- host/auth.sh@44 -- # digest=sha512 00:21:35.129 04:12:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:35.129 04:12:49 -- host/auth.sh@44 -- # keyid=2 00:21:35.129 04:12:49 -- host/auth.sh@45 -- # key=DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8: 00:21:35.129 04:12:49 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:21:35.129 04:12:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:21:35.129 04:12:49 -- host/auth.sh@49 -- # echo DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8: 00:21:35.129 04:12:49 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:21:35.129 04:12:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:35.129 04:12:49 -- host/auth.sh@68 -- # digest=sha512 00:21:35.129 04:12:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:21:35.129 04:12:49 -- host/auth.sh@68 -- # keyid=2 00:21:35.129 04:12:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:35.129 04:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.129 04:12:49 -- common/autotest_common.sh@10 -- # set +x 00:21:35.129 04:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.129 04:12:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:35.129 04:12:49 -- nvmf/common.sh@717 -- # local ip 00:21:35.129 04:12:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:35.129 04:12:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 
00:21:35.129 04:12:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:35.129 04:12:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:35.129 04:12:49 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:35.129 04:12:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:35.129 04:12:49 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:35.129 04:12:49 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:35.129 04:12:49 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:35.129 04:12:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:35.129 04:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.129 04:12:49 -- common/autotest_common.sh@10 -- # set +x 00:21:35.387 nvme0n1 00:21:35.387 04:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.387 04:12:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:35.387 04:12:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:35.387 04:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.387 04:12:49 -- common/autotest_common.sh@10 -- # set +x 00:21:35.387 04:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.387 04:12:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.387 04:12:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:35.387 04:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.387 04:12:49 -- common/autotest_common.sh@10 -- # set +x 00:21:35.387 04:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.387 04:12:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:35.387 04:12:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:21:35.387 04:12:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:35.387 04:12:49 -- host/auth.sh@44 -- # 
digest=sha512 00:21:35.387 04:12:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:35.387 04:12:49 -- host/auth.sh@44 -- # keyid=3 00:21:35.387 04:12:49 -- host/auth.sh@45 -- # key=DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==: 00:21:35.387 04:12:49 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:21:35.387 04:12:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:21:35.387 04:12:49 -- host/auth.sh@49 -- # echo DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==: 00:21:35.387 04:12:49 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:21:35.387 04:12:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:35.387 04:12:49 -- host/auth.sh@68 -- # digest=sha512 00:21:35.387 04:12:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:21:35.387 04:12:49 -- host/auth.sh@68 -- # keyid=3 00:21:35.387 04:12:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:35.387 04:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.387 04:12:49 -- common/autotest_common.sh@10 -- # set +x 00:21:35.387 04:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.387 04:12:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:35.387 04:12:49 -- nvmf/common.sh@717 -- # local ip 00:21:35.387 04:12:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:35.387 04:12:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:35.387 04:12:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:35.387 04:12:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:35.387 04:12:49 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:35.387 04:12:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:35.387 04:12:49 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:35.387 04:12:49 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:35.387 04:12:49 -- nvmf/common.sh@731 -- # echo 192.168.100.8 
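The `nvmf/common.sh@717-731` entries repeated throughout this log are one run of `get_main_ns_ip`: it maps each transport to the *name* of the environment variable holding its address, then dereferences the name for the active transport. A minimal sketch of that selection logic, reconstructed from the xtrace lines above (variable names match the log; the exact function body in `nvmf/common.sh` may differ):

```shell
# Sketch of the candidate-selection flow seen at nvmf/common.sh@717-731.
# ip_candidates stores variable *names*, not values; the chosen name is
# dereferenced with ${!ip} at the end, which is why the log shows
# "ip=NVMF_FIRST_TARGET_IP" followed by "echo 192.168.100.8".
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1            # no transport selected
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z $ip ]] && return 1                        # unknown transport
    [[ -z ${!ip} ]] && return 1                     # candidate var is unset
    echo "${!ip}"
}
```

In this run `TEST_TRANSPORT=rdma` and `NVMF_FIRST_TARGET_IP=192.168.100.8`, so every invocation resolves to `192.168.100.8`, which is then passed as `-a` to `bdev_nvme_attach_controller`.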
00:21:35.387 04:12:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:21:35.388 04:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.388 04:12:49 -- common/autotest_common.sh@10 -- # set +x 00:21:35.646 nvme0n1 00:21:35.646 04:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.646 04:12:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:35.646 04:12:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:35.646 04:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.646 04:12:49 -- common/autotest_common.sh@10 -- # set +x 00:21:35.646 04:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.646 04:12:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.646 04:12:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:35.646 04:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.646 04:12:50 -- common/autotest_common.sh@10 -- # set +x 00:21:35.646 04:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.646 04:12:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:35.646 04:12:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:21:35.646 04:12:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:35.646 04:12:50 -- host/auth.sh@44 -- # digest=sha512 00:21:35.646 04:12:50 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:35.646 04:12:50 -- host/auth.sh@44 -- # keyid=4 00:21:35.646 04:12:50 -- host/auth.sh@45 -- # key=DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=: 00:21:35.646 04:12:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:21:35.646 04:12:50 -- host/auth.sh@48 -- # echo ffdhe2048 00:21:35.646 04:12:50 -- host/auth.sh@49 -- # echo 
DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=: 00:21:35.646 04:12:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:21:35.646 04:12:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:35.646 04:12:50 -- host/auth.sh@68 -- # digest=sha512 00:21:35.646 04:12:50 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:21:35.646 04:12:50 -- host/auth.sh@68 -- # keyid=4 00:21:35.646 04:12:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:35.646 04:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.646 04:12:50 -- common/autotest_common.sh@10 -- # set +x 00:21:35.646 04:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.646 04:12:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:35.646 04:12:50 -- nvmf/common.sh@717 -- # local ip 00:21:35.646 04:12:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:35.646 04:12:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:35.646 04:12:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:35.646 04:12:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:35.646 04:12:50 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:35.646 04:12:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:35.646 04:12:50 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:35.646 04:12:50 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:35.646 04:12:50 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:35.646 04:12:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:35.646 04:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.646 04:12:50 -- common/autotest_common.sh@10 -- # set +x 00:21:35.905 nvme0n1 00:21:35.905 04:12:50 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.905 04:12:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:35.905 04:12:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:35.905 04:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.905 04:12:50 -- common/autotest_common.sh@10 -- # set +x 00:21:35.905 04:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.905 04:12:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.905 04:12:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:35.905 04:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.905 04:12:50 -- common/autotest_common.sh@10 -- # set +x 00:21:35.905 04:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.905 04:12:50 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:21:35.905 04:12:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:35.905 04:12:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:21:35.905 04:12:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:35.905 04:12:50 -- host/auth.sh@44 -- # digest=sha512 00:21:35.905 04:12:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:35.905 04:12:50 -- host/auth.sh@44 -- # keyid=0 00:21:35.905 04:12:50 -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd: 00:21:35.906 04:12:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:21:35.906 04:12:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:21:35.906 04:12:50 -- host/auth.sh@49 -- # echo DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd: 00:21:35.906 04:12:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:21:35.906 04:12:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:35.906 04:12:50 -- host/auth.sh@68 -- # digest=sha512 00:21:35.906 04:12:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:21:35.906 04:12:50 -- host/auth.sh@68 -- # keyid=0 00:21:35.906 04:12:50 -- host/auth.sh@69 
-- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:35.906 04:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.906 04:12:50 -- common/autotest_common.sh@10 -- # set +x 00:21:35.906 04:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.906 04:12:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:35.906 04:12:50 -- nvmf/common.sh@717 -- # local ip 00:21:35.906 04:12:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:35.906 04:12:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:35.906 04:12:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:35.906 04:12:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:35.906 04:12:50 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:35.906 04:12:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:35.906 04:12:50 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:35.906 04:12:50 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:35.906 04:12:50 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:35.906 04:12:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:21:35.906 04:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.906 04:12:50 -- common/autotest_common.sh@10 -- # set +x 00:21:36.165 nvme0n1 00:21:36.165 04:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.165 04:12:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:36.165 04:12:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:36.165 04:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.165 04:12:50 -- common/autotest_common.sh@10 -- # set +x 00:21:36.165 04:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.165 04:12:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.165 04:12:50 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:36.165 04:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.165 04:12:50 -- common/autotest_common.sh@10 -- # set +x 00:21:36.165 04:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.165 04:12:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:36.165 04:12:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:21:36.165 04:12:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:36.165 04:12:50 -- host/auth.sh@44 -- # digest=sha512 00:21:36.165 04:12:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:36.165 04:12:50 -- host/auth.sh@44 -- # keyid=1 00:21:36.165 04:12:50 -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:36.165 04:12:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:21:36.165 04:12:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:21:36.165 04:12:50 -- host/auth.sh@49 -- # echo DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:36.165 04:12:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:21:36.165 04:12:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:36.165 04:12:50 -- host/auth.sh@68 -- # digest=sha512 00:21:36.165 04:12:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:21:36.165 04:12:50 -- host/auth.sh@68 -- # keyid=1 00:21:36.165 04:12:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:36.165 04:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.165 04:12:50 -- common/autotest_common.sh@10 -- # set +x 00:21:36.165 04:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.165 04:12:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:36.165 04:12:50 -- nvmf/common.sh@717 -- # local ip 00:21:36.165 04:12:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:36.165 04:12:50 -- nvmf/common.sh@718 -- 
# local -A ip_candidates 00:21:36.165 04:12:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:36.165 04:12:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:36.165 04:12:50 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:36.165 04:12:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:36.165 04:12:50 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:36.165 04:12:50 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:36.165 04:12:50 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:36.166 04:12:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:21:36.166 04:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.166 04:12:50 -- common/autotest_common.sh@10 -- # set +x 00:21:36.425 nvme0n1 00:21:36.425 04:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.425 04:12:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:36.425 04:12:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:36.425 04:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.425 04:12:50 -- common/autotest_common.sh@10 -- # set +x 00:21:36.425 04:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.425 04:12:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.425 04:12:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:36.425 04:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.425 04:12:50 -- common/autotest_common.sh@10 -- # set +x 00:21:36.425 04:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.425 04:12:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:36.425 04:12:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:21:36.425 04:12:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:36.425 04:12:50 -- 
host/auth.sh@44 -- # digest=sha512 00:21:36.425 04:12:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:36.425 04:12:50 -- host/auth.sh@44 -- # keyid=2 00:21:36.425 04:12:50 -- host/auth.sh@45 -- # key=DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8: 00:21:36.425 04:12:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:21:36.425 04:12:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:21:36.425 04:12:50 -- host/auth.sh@49 -- # echo DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8: 00:21:36.425 04:12:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:21:36.425 04:12:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:36.425 04:12:50 -- host/auth.sh@68 -- # digest=sha512 00:21:36.425 04:12:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:21:36.425 04:12:50 -- host/auth.sh@68 -- # keyid=2 00:21:36.425 04:12:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:36.425 04:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.425 04:12:50 -- common/autotest_common.sh@10 -- # set +x 00:21:36.425 04:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.425 04:12:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:36.425 04:12:50 -- nvmf/common.sh@717 -- # local ip 00:21:36.425 04:12:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:36.425 04:12:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:36.425 04:12:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:36.425 04:12:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:36.425 04:12:50 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:36.425 04:12:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:36.425 04:12:50 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:36.425 04:12:50 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:36.425 04:12:50 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:36.425 04:12:50 -- 
host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:36.425 04:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.425 04:12:50 -- common/autotest_common.sh@10 -- # set +x 00:21:36.685 nvme0n1 00:21:36.685 04:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.685 04:12:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:36.685 04:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.685 04:12:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:36.685 04:12:51 -- common/autotest_common.sh@10 -- # set +x 00:21:36.685 04:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.685 04:12:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.685 04:12:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:36.685 04:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.685 04:12:51 -- common/autotest_common.sh@10 -- # set +x 00:21:36.685 04:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.685 04:12:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:36.685 04:12:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:21:36.685 04:12:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:36.685 04:12:51 -- host/auth.sh@44 -- # digest=sha512 00:21:36.685 04:12:51 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:36.685 04:12:51 -- host/auth.sh@44 -- # keyid=3 00:21:36.685 04:12:51 -- host/auth.sh@45 -- # key=DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==: 00:21:36.685 04:12:51 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:21:36.685 04:12:51 -- host/auth.sh@48 -- # echo ffdhe3072 00:21:36.685 04:12:51 -- host/auth.sh@49 -- # echo DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==: 00:21:36.685 04:12:51 -- 
host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:21:36.685 04:12:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:36.685 04:12:51 -- host/auth.sh@68 -- # digest=sha512 00:21:36.685 04:12:51 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:21:36.685 04:12:51 -- host/auth.sh@68 -- # keyid=3 00:21:36.685 04:12:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:36.685 04:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.685 04:12:51 -- common/autotest_common.sh@10 -- # set +x 00:21:36.685 04:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.685 04:12:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:36.685 04:12:51 -- nvmf/common.sh@717 -- # local ip 00:21:36.685 04:12:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:36.685 04:12:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:36.685 04:12:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:36.685 04:12:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:36.685 04:12:51 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:36.685 04:12:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:36.685 04:12:51 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:36.685 04:12:51 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:36.685 04:12:51 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:36.685 04:12:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:21:36.685 04:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.685 04:12:51 -- common/autotest_common.sh@10 -- # set +x 00:21:36.945 nvme0n1 00:21:36.945 04:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.945 04:12:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:36.945 04:12:51 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:21:36.945 04:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.945 04:12:51 -- common/autotest_common.sh@10 -- # set +x 00:21:36.945 04:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.945 04:12:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.945 04:12:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:36.945 04:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.945 04:12:51 -- common/autotest_common.sh@10 -- # set +x 00:21:36.945 04:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.945 04:12:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:36.946 04:12:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:21:36.946 04:12:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:36.946 04:12:51 -- host/auth.sh@44 -- # digest=sha512 00:21:36.946 04:12:51 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:36.946 04:12:51 -- host/auth.sh@44 -- # keyid=4 00:21:36.946 04:12:51 -- host/auth.sh@45 -- # key=DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=: 00:21:36.946 04:12:51 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:21:36.946 04:12:51 -- host/auth.sh@48 -- # echo ffdhe3072 00:21:36.946 04:12:51 -- host/auth.sh@49 -- # echo DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=: 00:21:36.946 04:12:51 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:21:36.946 04:12:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:36.946 04:12:51 -- host/auth.sh@68 -- # digest=sha512 00:21:36.946 04:12:51 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:21:36.946 04:12:51 -- host/auth.sh@68 -- # keyid=4 00:21:36.946 04:12:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:36.946 04:12:51 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:21:36.946 04:12:51 -- common/autotest_common.sh@10 -- # set +x 00:21:36.946 04:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.946 04:12:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:36.946 04:12:51 -- nvmf/common.sh@717 -- # local ip 00:21:36.946 04:12:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:36.946 04:12:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:36.946 04:12:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:36.946 04:12:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:36.946 04:12:51 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:36.946 04:12:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:36.946 04:12:51 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:36.946 04:12:51 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:36.946 04:12:51 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:36.946 04:12:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:36.946 04:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.946 04:12:51 -- common/autotest_common.sh@10 -- # set +x 00:21:37.205 nvme0n1 00:21:37.205 04:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.205 04:12:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:37.205 04:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.205 04:12:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:37.205 04:12:51 -- common/autotest_common.sh@10 -- # set +x 00:21:37.205 04:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.205 04:12:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.205 04:12:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:37.205 04:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.205 
04:12:51 -- common/autotest_common.sh@10 -- # set +x 00:21:37.205 04:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.205 04:12:51 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:21:37.205 04:12:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:37.205 04:12:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:21:37.205 04:12:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:37.205 04:12:51 -- host/auth.sh@44 -- # digest=sha512 00:21:37.205 04:12:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:37.205 04:12:51 -- host/auth.sh@44 -- # keyid=0 00:21:37.205 04:12:51 -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd: 00:21:37.205 04:12:51 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:21:37.205 04:12:51 -- host/auth.sh@48 -- # echo ffdhe4096 00:21:37.205 04:12:51 -- host/auth.sh@49 -- # echo DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd: 00:21:37.205 04:12:51 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:21:37.205 04:12:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:37.205 04:12:51 -- host/auth.sh@68 -- # digest=sha512 00:21:37.205 04:12:51 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:21:37.205 04:12:51 -- host/auth.sh@68 -- # keyid=0 00:21:37.205 04:12:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:37.205 04:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.205 04:12:51 -- common/autotest_common.sh@10 -- # set +x 00:21:37.205 04:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.205 04:12:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:37.205 04:12:51 -- nvmf/common.sh@717 -- # local ip 00:21:37.205 04:12:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:37.205 04:12:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:37.205 04:12:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:21:37.205 04:12:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:37.205 04:12:51 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:37.205 04:12:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:37.205 04:12:51 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:37.205 04:12:51 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:37.205 04:12:51 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:37.205 04:12:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:21:37.205 04:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.205 04:12:51 -- common/autotest_common.sh@10 -- # set +x 00:21:37.465 nvme0n1 00:21:37.465 04:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.465 04:12:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:37.465 04:12:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:37.465 04:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.465 04:12:51 -- common/autotest_common.sh@10 -- # set +x 00:21:37.465 04:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.465 04:12:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.465 04:12:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:37.465 04:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.465 04:12:51 -- common/autotest_common.sh@10 -- # set +x 00:21:37.724 04:12:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.724 04:12:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:37.724 04:12:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:21:37.724 04:12:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:37.724 04:12:52 -- host/auth.sh@44 -- # digest=sha512 00:21:37.724 04:12:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:37.724 04:12:52 
-- host/auth.sh@44 -- # keyid=1 00:21:37.724 04:12:52 -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:37.724 04:12:52 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:21:37.724 04:12:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:21:37.724 04:12:52 -- host/auth.sh@49 -- # echo DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:37.724 04:12:52 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:21:37.724 04:12:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:37.724 04:12:52 -- host/auth.sh@68 -- # digest=sha512 00:21:37.724 04:12:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:21:37.724 04:12:52 -- host/auth.sh@68 -- # keyid=1 00:21:37.724 04:12:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:37.724 04:12:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.724 04:12:52 -- common/autotest_common.sh@10 -- # set +x 00:21:37.724 04:12:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.724 04:12:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:37.724 04:12:52 -- nvmf/common.sh@717 -- # local ip 00:21:37.724 04:12:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:37.724 04:12:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:37.724 04:12:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:37.724 04:12:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:37.724 04:12:52 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:37.724 04:12:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:37.724 04:12:52 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:37.724 04:12:52 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:37.724 04:12:52 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:37.724 04:12:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma 
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:21:37.724 04:12:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.724 04:12:52 -- common/autotest_common.sh@10 -- # set +x 00:21:37.984 nvme0n1 00:21:37.984 04:12:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.984 04:12:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:37.984 04:12:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:37.984 04:12:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.984 04:12:52 -- common/autotest_common.sh@10 -- # set +x 00:21:37.984 04:12:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.984 04:12:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.984 04:12:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:37.984 04:12:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.984 04:12:52 -- common/autotest_common.sh@10 -- # set +x 00:21:37.984 04:12:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.984 04:12:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:37.984 04:12:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:21:37.984 04:12:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:37.984 04:12:52 -- host/auth.sh@44 -- # digest=sha512 00:21:37.984 04:12:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:37.984 04:12:52 -- host/auth.sh@44 -- # keyid=2 00:21:37.984 04:12:52 -- host/auth.sh@45 -- # key=DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8: 00:21:37.984 04:12:52 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:21:37.984 04:12:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:21:37.984 04:12:52 -- host/auth.sh@49 -- # echo DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8: 00:21:37.984 04:12:52 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:21:37.984 04:12:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 
00:21:37.984 04:12:52 -- host/auth.sh@68 -- # digest=sha512 00:21:37.984 04:12:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:21:37.984 04:12:52 -- host/auth.sh@68 -- # keyid=2 00:21:37.984 04:12:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:37.984 04:12:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.984 04:12:52 -- common/autotest_common.sh@10 -- # set +x 00:21:37.984 04:12:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.984 04:12:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:37.984 04:12:52 -- nvmf/common.sh@717 -- # local ip 00:21:37.984 04:12:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:37.984 04:12:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:37.984 04:12:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:37.984 04:12:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:37.984 04:12:52 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:37.984 04:12:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:37.984 04:12:52 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:37.984 04:12:52 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:37.984 04:12:52 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:37.984 04:12:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:37.984 04:12:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.984 04:12:52 -- common/autotest_common.sh@10 -- # set +x 00:21:38.244 nvme0n1 00:21:38.244 04:12:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.244 04:12:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:38.244 04:12:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:38.244 04:12:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.244 04:12:52 -- 
common/autotest_common.sh@10 -- # set +x 00:21:38.244 04:12:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.244 04:12:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.244 04:12:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:38.244 04:12:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.244 04:12:52 -- common/autotest_common.sh@10 -- # set +x 00:21:38.244 04:12:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.244 04:12:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:38.244 04:12:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:21:38.244 04:12:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:38.244 04:12:52 -- host/auth.sh@44 -- # digest=sha512 00:21:38.244 04:12:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:38.244 04:12:52 -- host/auth.sh@44 -- # keyid=3 00:21:38.244 04:12:52 -- host/auth.sh@45 -- # key=DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==: 00:21:38.244 04:12:52 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:21:38.244 04:12:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:21:38.244 04:12:52 -- host/auth.sh@49 -- # echo DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==: 00:21:38.244 04:12:52 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:21:38.244 04:12:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:38.244 04:12:52 -- host/auth.sh@68 -- # digest=sha512 00:21:38.244 04:12:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:21:38.244 04:12:52 -- host/auth.sh@68 -- # keyid=3 00:21:38.244 04:12:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:38.244 04:12:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.244 04:12:52 -- common/autotest_common.sh@10 -- # set +x 00:21:38.244 04:12:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.244 04:12:52 
-- host/auth.sh@70 -- # get_main_ns_ip 00:21:38.244 04:12:52 -- nvmf/common.sh@717 -- # local ip 00:21:38.244 04:12:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:38.244 04:12:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:38.244 04:12:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:38.244 04:12:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:38.244 04:12:52 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:38.244 04:12:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:38.244 04:12:52 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:38.244 04:12:52 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:38.244 04:12:52 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:38.244 04:12:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:21:38.244 04:12:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.244 04:12:52 -- common/autotest_common.sh@10 -- # set +x 00:21:38.504 nvme0n1 00:21:38.504 04:12:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.504 04:12:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:38.504 04:12:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:38.504 04:12:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.504 04:12:52 -- common/autotest_common.sh@10 -- # set +x 00:21:38.504 04:12:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.504 04:12:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.504 04:12:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:38.504 04:12:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.504 04:12:52 -- common/autotest_common.sh@10 -- # set +x 00:21:38.504 04:12:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.504 04:12:52 -- host/auth.sh@109 -- # for 
keyid in "${!keys[@]}" 00:21:38.504 04:12:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:21:38.504 04:12:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:38.504 04:12:52 -- host/auth.sh@44 -- # digest=sha512 00:21:38.504 04:12:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:38.504 04:12:52 -- host/auth.sh@44 -- # keyid=4 00:21:38.504 04:12:52 -- host/auth.sh@45 -- # key=DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=: 00:21:38.504 04:12:52 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:21:38.504 04:12:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:21:38.504 04:12:52 -- host/auth.sh@49 -- # echo DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=: 00:21:38.504 04:12:52 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:21:38.504 04:12:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:38.504 04:12:52 -- host/auth.sh@68 -- # digest=sha512 00:21:38.504 04:12:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:21:38.504 04:12:52 -- host/auth.sh@68 -- # keyid=4 00:21:38.504 04:12:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:38.504 04:12:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.504 04:12:52 -- common/autotest_common.sh@10 -- # set +x 00:21:38.504 04:12:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.504 04:12:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:38.504 04:12:52 -- nvmf/common.sh@717 -- # local ip 00:21:38.504 04:12:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:38.504 04:12:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:38.504 04:12:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:38.504 04:12:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:38.504 04:12:52 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:38.504 
04:12:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:38.504 04:12:52 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:38.504 04:12:52 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:38.504 04:12:52 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:38.504 04:12:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:38.504 04:12:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.504 04:12:52 -- common/autotest_common.sh@10 -- # set +x 00:21:38.764 nvme0n1 00:21:38.764 04:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.764 04:12:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:38.764 04:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.764 04:12:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:38.764 04:12:53 -- common/autotest_common.sh@10 -- # set +x 00:21:38.764 04:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.764 04:12:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.764 04:12:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:38.764 04:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.764 04:12:53 -- common/autotest_common.sh@10 -- # set +x 00:21:38.764 04:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.764 04:12:53 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:21:38.764 04:12:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:38.764 04:12:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:21:38.764 04:12:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:38.764 04:12:53 -- host/auth.sh@44 -- # digest=sha512 00:21:38.764 04:12:53 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:38.764 04:12:53 -- host/auth.sh@44 -- # keyid=0 00:21:38.764 04:12:53 -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd: 00:21:38.764 04:12:53 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:21:38.764 04:12:53 -- host/auth.sh@48 -- # echo ffdhe6144 00:21:38.764 04:12:53 -- host/auth.sh@49 -- # echo DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd: 00:21:38.764 04:12:53 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:21:38.764 04:12:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:38.764 04:12:53 -- host/auth.sh@68 -- # digest=sha512 00:21:38.764 04:12:53 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:21:38.764 04:12:53 -- host/auth.sh@68 -- # keyid=0 00:21:38.764 04:12:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:38.764 04:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.764 04:12:53 -- common/autotest_common.sh@10 -- # set +x 00:21:38.764 04:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.764 04:12:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:38.764 04:12:53 -- nvmf/common.sh@717 -- # local ip 00:21:39.024 04:12:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:39.024 04:12:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:39.024 04:12:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:39.024 04:12:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:39.024 04:12:53 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:39.024 04:12:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:39.024 04:12:53 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:39.024 04:12:53 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:39.024 04:12:53 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:39.024 04:12:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:21:39.024 
04:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.024 04:12:53 -- common/autotest_common.sh@10 -- # set +x 00:21:39.284 nvme0n1 00:21:39.284 04:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.284 04:12:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:39.284 04:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.284 04:12:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:39.284 04:12:53 -- common/autotest_common.sh@10 -- # set +x 00:21:39.284 04:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.284 04:12:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.284 04:12:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:39.284 04:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.284 04:12:53 -- common/autotest_common.sh@10 -- # set +x 00:21:39.284 04:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.284 04:12:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:39.284 04:12:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:21:39.284 04:12:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:39.284 04:12:53 -- host/auth.sh@44 -- # digest=sha512 00:21:39.284 04:12:53 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:39.284 04:12:53 -- host/auth.sh@44 -- # keyid=1 00:21:39.284 04:12:53 -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:39.284 04:12:53 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:21:39.284 04:12:53 -- host/auth.sh@48 -- # echo ffdhe6144 00:21:39.284 04:12:53 -- host/auth.sh@49 -- # echo DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:39.284 04:12:53 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:21:39.284 04:12:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:39.284 04:12:53 -- host/auth.sh@68 -- # digest=sha512 00:21:39.284 
04:12:53 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:21:39.284 04:12:53 -- host/auth.sh@68 -- # keyid=1 00:21:39.284 04:12:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:39.284 04:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.284 04:12:53 -- common/autotest_common.sh@10 -- # set +x 00:21:39.284 04:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.284 04:12:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:39.284 04:12:53 -- nvmf/common.sh@717 -- # local ip 00:21:39.284 04:12:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:39.284 04:12:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:39.284 04:12:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:39.284 04:12:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:39.284 04:12:53 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:39.284 04:12:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:39.284 04:12:53 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:39.284 04:12:53 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:39.284 04:12:53 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:39.284 04:12:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:21:39.284 04:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.284 04:12:53 -- common/autotest_common.sh@10 -- # set +x 00:21:39.544 nvme0n1 00:21:39.803 04:12:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.803 04:12:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:39.803 04:12:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:39.803 04:12:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.803 04:12:54 -- common/autotest_common.sh@10 -- # set +x 00:21:39.803 04:12:54 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.804 04:12:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.804 04:12:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:39.804 04:12:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.804 04:12:54 -- common/autotest_common.sh@10 -- # set +x 00:21:39.804 04:12:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.804 04:12:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:39.804 04:12:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:21:39.804 04:12:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:39.804 04:12:54 -- host/auth.sh@44 -- # digest=sha512 00:21:39.804 04:12:54 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:39.804 04:12:54 -- host/auth.sh@44 -- # keyid=2 00:21:39.804 04:12:54 -- host/auth.sh@45 -- # key=DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8: 00:21:39.804 04:12:54 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:21:39.804 04:12:54 -- host/auth.sh@48 -- # echo ffdhe6144 00:21:39.804 04:12:54 -- host/auth.sh@49 -- # echo DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8: 00:21:39.804 04:12:54 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:21:39.804 04:12:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:39.804 04:12:54 -- host/auth.sh@68 -- # digest=sha512 00:21:39.804 04:12:54 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:21:39.804 04:12:54 -- host/auth.sh@68 -- # keyid=2 00:21:39.804 04:12:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:39.804 04:12:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.804 04:12:54 -- common/autotest_common.sh@10 -- # set +x 00:21:39.804 04:12:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.804 04:12:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:39.804 04:12:54 -- nvmf/common.sh@717 -- # local ip 00:21:39.804 
04:12:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:39.804 04:12:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:39.804 04:12:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:39.804 04:12:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:39.804 04:12:54 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:39.804 04:12:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:39.804 04:12:54 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:39.804 04:12:54 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:39.804 04:12:54 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:39.804 04:12:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:39.804 04:12:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.804 04:12:54 -- common/autotest_common.sh@10 -- # set +x 00:21:40.063 nvme0n1 00:21:40.063 04:12:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.063 04:12:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:40.064 04:12:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:40.064 04:12:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.064 04:12:54 -- common/autotest_common.sh@10 -- # set +x 00:21:40.064 04:12:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.064 04:12:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.064 04:12:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:40.064 04:12:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.064 04:12:54 -- common/autotest_common.sh@10 -- # set +x 00:21:40.064 04:12:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.064 04:12:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:40.064 04:12:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 
00:21:40.064 04:12:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:40.064 04:12:54 -- host/auth.sh@44 -- # digest=sha512 00:21:40.064 04:12:54 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:40.064 04:12:54 -- host/auth.sh@44 -- # keyid=3 00:21:40.064 04:12:54 -- host/auth.sh@45 -- # key=DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==: 00:21:40.064 04:12:54 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:21:40.064 04:12:54 -- host/auth.sh@48 -- # echo ffdhe6144 00:21:40.064 04:12:54 -- host/auth.sh@49 -- # echo DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==: 00:21:40.064 04:12:54 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:21:40.064 04:12:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:40.064 04:12:54 -- host/auth.sh@68 -- # digest=sha512 00:21:40.064 04:12:54 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:21:40.064 04:12:54 -- host/auth.sh@68 -- # keyid=3 00:21:40.064 04:12:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:40.064 04:12:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.064 04:12:54 -- common/autotest_common.sh@10 -- # set +x 00:21:40.064 04:12:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.064 04:12:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:40.064 04:12:54 -- nvmf/common.sh@717 -- # local ip 00:21:40.064 04:12:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:40.064 04:12:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:40.064 04:12:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:40.064 04:12:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:40.064 04:12:54 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:40.064 04:12:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:40.064 04:12:54 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:40.064 
04:12:54 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:40.064 04:12:54 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:40.064 04:12:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:21:40.064 04:12:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.064 04:12:54 -- common/autotest_common.sh@10 -- # set +x 00:21:40.633 nvme0n1 00:21:40.633 04:12:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.633 04:12:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:40.633 04:12:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:40.633 04:12:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.633 04:12:54 -- common/autotest_common.sh@10 -- # set +x 00:21:40.633 04:12:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.633 04:12:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.633 04:12:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:40.633 04:12:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.633 04:12:54 -- common/autotest_common.sh@10 -- # set +x 00:21:40.633 04:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.633 04:12:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:40.633 04:12:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:21:40.633 04:12:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:40.633 04:12:55 -- host/auth.sh@44 -- # digest=sha512 00:21:40.633 04:12:55 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:40.633 04:12:55 -- host/auth.sh@44 -- # keyid=4 00:21:40.633 04:12:55 -- host/auth.sh@45 -- # key=DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=: 00:21:40.633 04:12:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:21:40.633 04:12:55 -- host/auth.sh@48 -- # echo ffdhe6144 
00:21:40.633 04:12:55 -- host/auth.sh@49 -- # echo DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=: 00:21:40.633 04:12:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:21:40.633 04:12:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:40.633 04:12:55 -- host/auth.sh@68 -- # digest=sha512 00:21:40.633 04:12:55 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:21:40.633 04:12:55 -- host/auth.sh@68 -- # keyid=4 00:21:40.633 04:12:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:40.633 04:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.633 04:12:55 -- common/autotest_common.sh@10 -- # set +x 00:21:40.633 04:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.633 04:12:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:40.633 04:12:55 -- nvmf/common.sh@717 -- # local ip 00:21:40.633 04:12:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:40.633 04:12:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:40.633 04:12:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:40.633 04:12:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:40.633 04:12:55 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:40.633 04:12:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:40.633 04:12:55 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:40.633 04:12:55 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:40.633 04:12:55 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:40.633 04:12:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:40.633 04:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.633 04:12:55 -- common/autotest_common.sh@10 -- # set +x 00:21:40.891 nvme0n1 
00:21:40.891 04:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.891 04:12:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:40.891 04:12:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:40.891 04:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.891 04:12:55 -- common/autotest_common.sh@10 -- # set +x 00:21:40.891 04:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.891 04:12:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.891 04:12:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:40.891 04:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.891 04:12:55 -- common/autotest_common.sh@10 -- # set +x 00:21:41.150 04:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.150 04:12:55 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.150 04:12:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:41.150 04:12:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:21:41.150 04:12:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:41.150 04:12:55 -- host/auth.sh@44 -- # digest=sha512 00:21:41.150 04:12:55 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:41.150 04:12:55 -- host/auth.sh@44 -- # keyid=0 00:21:41.150 04:12:55 -- host/auth.sh@45 -- # key=DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd: 00:21:41.150 04:12:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:21:41.150 04:12:55 -- host/auth.sh@48 -- # echo ffdhe8192 00:21:41.150 04:12:55 -- host/auth.sh@49 -- # echo DHHC-1:00:MzlhOThiYjRiYzM3N2RlM2I0MDBhZGY2N2U1MTk5NzZZw4hd: 00:21:41.150 04:12:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:21:41.150 04:12:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:41.150 04:12:55 -- host/auth.sh@68 -- # digest=sha512 00:21:41.150 04:12:55 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:21:41.150 04:12:55 -- host/auth.sh@68 -- # keyid=0 00:21:41.150 
04:12:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:41.150 04:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.150 04:12:55 -- common/autotest_common.sh@10 -- # set +x 00:21:41.150 04:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.150 04:12:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:41.150 04:12:55 -- nvmf/common.sh@717 -- # local ip 00:21:41.150 04:12:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:41.150 04:12:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:41.150 04:12:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:41.150 04:12:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:41.150 04:12:55 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:41.150 04:12:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:41.150 04:12:55 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:41.150 04:12:55 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:41.150 04:12:55 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:41.150 04:12:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:21:41.150 04:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.150 04:12:55 -- common/autotest_common.sh@10 -- # set +x 00:21:41.719 nvme0n1 00:21:41.719 04:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.719 04:12:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:41.719 04:12:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:41.719 04:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.719 04:12:55 -- common/autotest_common.sh@10 -- # set +x 00:21:41.719 04:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.719 04:12:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:21:41.719 04:12:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:41.719 04:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.719 04:12:55 -- common/autotest_common.sh@10 -- # set +x 00:21:41.719 04:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.719 04:12:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:41.719 04:12:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:21:41.719 04:12:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:41.719 04:12:56 -- host/auth.sh@44 -- # digest=sha512 00:21:41.719 04:12:56 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:41.719 04:12:56 -- host/auth.sh@44 -- # keyid=1 00:21:41.719 04:12:56 -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:41.719 04:12:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:21:41.719 04:12:56 -- host/auth.sh@48 -- # echo ffdhe8192 00:21:41.719 04:12:56 -- host/auth.sh@49 -- # echo DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:41.719 04:12:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:21:41.719 04:12:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:41.719 04:12:56 -- host/auth.sh@68 -- # digest=sha512 00:21:41.719 04:12:56 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:21:41.719 04:12:56 -- host/auth.sh@68 -- # keyid=1 00:21:41.719 04:12:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:41.719 04:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.719 04:12:56 -- common/autotest_common.sh@10 -- # set +x 00:21:41.719 04:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.719 04:12:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:41.719 04:12:56 -- nvmf/common.sh@717 -- # local ip 00:21:41.719 04:12:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:41.719 04:12:56 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:41.719 04:12:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:41.719 04:12:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:41.719 04:12:56 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:41.719 04:12:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:41.720 04:12:56 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:41.720 04:12:56 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:41.720 04:12:56 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:41.720 04:12:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:21:41.720 04:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.720 04:12:56 -- common/autotest_common.sh@10 -- # set +x 00:21:42.290 nvme0n1 00:21:42.290 04:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.290 04:12:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:42.290 04:12:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:42.290 04:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.290 04:12:56 -- common/autotest_common.sh@10 -- # set +x 00:21:42.290 04:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.290 04:12:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.290 04:12:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:42.290 04:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.290 04:12:56 -- common/autotest_common.sh@10 -- # set +x 00:21:42.290 04:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.290 04:12:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:42.290 04:12:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:21:42.290 04:12:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 
00:21:42.290 04:12:56 -- host/auth.sh@44 -- # digest=sha512 00:21:42.290 04:12:56 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:42.290 04:12:56 -- host/auth.sh@44 -- # keyid=2 00:21:42.290 04:12:56 -- host/auth.sh@45 -- # key=DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8: 00:21:42.290 04:12:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:21:42.290 04:12:56 -- host/auth.sh@48 -- # echo ffdhe8192 00:21:42.290 04:12:56 -- host/auth.sh@49 -- # echo DHHC-1:01:YmY1ZmYzMjlmYTY5ZjBkOTRiMDRmZWU5OWUyYmFkODAeZXN8: 00:21:42.290 04:12:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:21:42.290 04:12:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:42.290 04:12:56 -- host/auth.sh@68 -- # digest=sha512 00:21:42.290 04:12:56 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:21:42.290 04:12:56 -- host/auth.sh@68 -- # keyid=2 00:21:42.290 04:12:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:42.290 04:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.290 04:12:56 -- common/autotest_common.sh@10 -- # set +x 00:21:42.290 04:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.290 04:12:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:42.290 04:12:56 -- nvmf/common.sh@717 -- # local ip 00:21:42.290 04:12:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:42.290 04:12:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:42.290 04:12:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:42.290 04:12:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:42.290 04:12:56 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:42.290 04:12:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:42.290 04:12:56 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:42.290 04:12:56 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:42.290 04:12:56 -- nvmf/common.sh@731 -- # echo 192.168.100.8 
00:21:42.290 04:12:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:42.290 04:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.290 04:12:56 -- common/autotest_common.sh@10 -- # set +x 00:21:42.859 nvme0n1 00:21:42.859 04:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.859 04:12:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:42.859 04:12:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:42.859 04:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.859 04:12:57 -- common/autotest_common.sh@10 -- # set +x 00:21:42.859 04:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.859 04:12:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.859 04:12:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:42.859 04:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.859 04:12:57 -- common/autotest_common.sh@10 -- # set +x 00:21:42.859 04:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.859 04:12:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:42.859 04:12:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:21:42.859 04:12:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:42.859 04:12:57 -- host/auth.sh@44 -- # digest=sha512 00:21:42.859 04:12:57 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:42.859 04:12:57 -- host/auth.sh@44 -- # keyid=3 00:21:42.859 04:12:57 -- host/auth.sh@45 -- # key=DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==: 00:21:42.859 04:12:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:21:42.859 04:12:57 -- host/auth.sh@48 -- # echo ffdhe8192 00:21:42.859 04:12:57 -- host/auth.sh@49 -- # echo DHHC-1:02:MjIyMzRhYmQ5ZGUzNmY3NmZhNzhjYjE3YWY4MDE0ZjFjZDI5MGRiNzM1YjgxODEwPDc8UA==: 00:21:42.859 
04:12:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:21:42.859 04:12:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:42.859 04:12:57 -- host/auth.sh@68 -- # digest=sha512 00:21:42.859 04:12:57 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:21:42.859 04:12:57 -- host/auth.sh@68 -- # keyid=3 00:21:42.859 04:12:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:42.859 04:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.859 04:12:57 -- common/autotest_common.sh@10 -- # set +x 00:21:42.859 04:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.859 04:12:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:42.859 04:12:57 -- nvmf/common.sh@717 -- # local ip 00:21:42.859 04:12:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:42.859 04:12:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:42.859 04:12:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:42.859 04:12:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:42.859 04:12:57 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:42.859 04:12:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:42.859 04:12:57 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:42.859 04:12:57 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:42.859 04:12:57 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:42.859 04:12:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:21:42.859 04:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.859 04:12:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.429 nvme0n1 00:21:43.429 04:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:43.429 04:12:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:43.429 
04:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:43.429 04:12:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:43.429 04:12:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.429 04:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:43.429 04:12:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.429 04:12:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:43.429 04:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:43.429 04:12:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.429 04:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:43.429 04:12:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:43.429 04:12:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:21:43.429 04:12:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:43.429 04:12:57 -- host/auth.sh@44 -- # digest=sha512 00:21:43.429 04:12:57 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:43.429 04:12:57 -- host/auth.sh@44 -- # keyid=4 00:21:43.429 04:12:57 -- host/auth.sh@45 -- # key=DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=: 00:21:43.429 04:12:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:21:43.429 04:12:57 -- host/auth.sh@48 -- # echo ffdhe8192 00:21:43.429 04:12:57 -- host/auth.sh@49 -- # echo DHHC-1:03:MmRhM2Y2NjkwZDMzY2QyYzQzYzg5NzFkOWY5OGE1NDExMjZkNWE4OTc1Nzg3YTk2YzU5NDUyOWRkZGNjZDJmZXfrtIU=: 00:21:43.429 04:12:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:21:43.429 04:12:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:43.429 04:12:57 -- host/auth.sh@68 -- # digest=sha512 00:21:43.429 04:12:57 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:21:43.429 04:12:57 -- host/auth.sh@68 -- # keyid=4 00:21:43.429 04:12:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:43.429 04:12:57 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:21:43.429 04:12:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.429 04:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:43.429 04:12:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:43.429 04:12:57 -- nvmf/common.sh@717 -- # local ip 00:21:43.429 04:12:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:43.429 04:12:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:43.429 04:12:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:43.429 04:12:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:43.429 04:12:57 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:43.429 04:12:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:43.429 04:12:57 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:43.429 04:12:57 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:43.429 04:12:57 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:43.429 04:12:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:43.429 04:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:43.429 04:12:57 -- common/autotest_common.sh@10 -- # set +x 00:21:43.998 nvme0n1 00:21:43.998 04:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:43.999 04:12:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:43.999 04:12:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:43.999 04:12:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:43.999 04:12:58 -- common/autotest_common.sh@10 -- # set +x 00:21:43.999 04:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:43.999 04:12:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.999 04:12:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:43.999 04:12:58 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:21:43.999 04:12:58 -- common/autotest_common.sh@10 -- # set +x 00:21:43.999 04:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:43.999 04:12:58 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:43.999 04:12:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:43.999 04:12:58 -- host/auth.sh@44 -- # digest=sha256 00:21:43.999 04:12:58 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:43.999 04:12:58 -- host/auth.sh@44 -- # keyid=1 00:21:43.999 04:12:58 -- host/auth.sh@45 -- # key=DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:43.999 04:12:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:21:43.999 04:12:58 -- host/auth.sh@48 -- # echo ffdhe2048 00:21:43.999 04:12:58 -- host/auth.sh@49 -- # echo DHHC-1:00:MzA5Y2E0OGFjNzgwMjkwY2M4NWFiYTQ0ZjQ1MTAxZGJiMTNkNWJmZmNiMWY1YTQxZaSq5A==: 00:21:43.999 04:12:58 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:43.999 04:12:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:43.999 04:12:58 -- common/autotest_common.sh@10 -- # set +x 00:21:43.999 04:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:43.999 04:12:58 -- host/auth.sh@119 -- # get_main_ns_ip 00:21:43.999 04:12:58 -- nvmf/common.sh@717 -- # local ip 00:21:43.999 04:12:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:43.999 04:12:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:43.999 04:12:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:43.999 04:12:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:43.999 04:12:58 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:43.999 04:12:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:43.999 04:12:58 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:43.999 04:12:58 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:43.999 04:12:58 -- 
nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:43.999 04:12:58 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:43.999 04:12:58 -- common/autotest_common.sh@638 -- # local es=0 00:21:43.999 04:12:58 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:43.999 04:12:58 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:43.999 04:12:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:43.999 04:12:58 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:43.999 04:12:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:43.999 04:12:58 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:43.999 04:12:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:43.999 04:12:58 -- common/autotest_common.sh@10 -- # set +x 00:21:43.999 request: 00:21:43.999 { 00:21:43.999 "name": "nvme0", 00:21:43.999 "trtype": "rdma", 00:21:43.999 "traddr": "192.168.100.8", 00:21:43.999 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:43.999 "adrfam": "ipv4", 00:21:43.999 "trsvcid": "4420", 00:21:43.999 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:43.999 "method": "bdev_nvme_attach_controller", 00:21:43.999 "req_id": 1 00:21:43.999 } 00:21:43.999 Got JSON-RPC error response 00:21:43.999 response: 00:21:43.999 { 00:21:43.999 "code": -32602, 00:21:43.999 "message": "Invalid parameters" 00:21:43.999 } 00:21:43.999 04:12:58 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:43.999 04:12:58 -- common/autotest_common.sh@641 -- # es=1 00:21:43.999 04:12:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 
00:21:43.999 04:12:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:43.999 04:12:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:43.999 04:12:58 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:21:43.999 04:12:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:43.999 04:12:58 -- host/auth.sh@121 -- # jq length 00:21:43.999 04:12:58 -- common/autotest_common.sh@10 -- # set +x 00:21:44.259 04:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:44.259 04:12:58 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:21:44.259 04:12:58 -- host/auth.sh@124 -- # get_main_ns_ip 00:21:44.259 04:12:58 -- nvmf/common.sh@717 -- # local ip 00:21:44.259 04:12:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:44.259 04:12:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:44.259 04:12:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:44.259 04:12:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:44.259 04:12:58 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:21:44.259 04:12:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:44.259 04:12:58 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:21:44.259 04:12:58 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:21:44.259 04:12:58 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:21:44.259 04:12:58 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:44.259 04:12:58 -- common/autotest_common.sh@638 -- # local es=0 00:21:44.259 04:12:58 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:44.259 04:12:58 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:44.259 04:12:58 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:44.259 04:12:58 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:44.259 04:12:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:44.259 04:12:58 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:44.259 04:12:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:44.259 04:12:58 -- common/autotest_common.sh@10 -- # set +x 00:21:44.259 request: 00:21:44.259 { 00:21:44.259 "name": "nvme0", 00:21:44.259 "trtype": "rdma", 00:21:44.259 "traddr": "192.168.100.8", 00:21:44.259 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:44.259 "adrfam": "ipv4", 00:21:44.259 "trsvcid": "4420", 00:21:44.259 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:44.259 "dhchap_key": "key2", 00:21:44.259 "method": "bdev_nvme_attach_controller", 00:21:44.259 "req_id": 1 00:21:44.259 } 00:21:44.259 Got JSON-RPC error response 00:21:44.259 response: 00:21:44.259 { 00:21:44.259 "code": -32602, 00:21:44.259 "message": "Invalid parameters" 00:21:44.259 } 00:21:44.259 04:12:58 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:44.259 04:12:58 -- common/autotest_common.sh@641 -- # es=1 00:21:44.259 04:12:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:44.259 04:12:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:44.259 04:12:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:44.259 04:12:58 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:21:44.259 04:12:58 -- host/auth.sh@127 -- # jq length 00:21:44.259 04:12:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:44.259 04:12:58 -- common/autotest_common.sh@10 -- # set +x 00:21:44.259 04:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:44.259 04:12:58 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:21:44.259 04:12:58 -- 
host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:21:44.259 04:12:58 -- host/auth.sh@130 -- # cleanup 00:21:44.259 04:12:58 -- host/auth.sh@24 -- # nvmftestfini 00:21:44.259 04:12:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:44.259 04:12:58 -- nvmf/common.sh@117 -- # sync 00:21:44.259 04:12:58 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:44.259 04:12:58 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:44.259 04:12:58 -- nvmf/common.sh@120 -- # set +e 00:21:44.259 04:12:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:44.259 04:12:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:44.259 rmmod nvme_rdma 00:21:44.259 rmmod nvme_fabrics 00:21:44.259 04:12:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:44.259 04:12:58 -- nvmf/common.sh@124 -- # set -e 00:21:44.259 04:12:58 -- nvmf/common.sh@125 -- # return 0 00:21:44.259 04:12:58 -- nvmf/common.sh@478 -- # '[' -n 391687 ']' 00:21:44.259 04:12:58 -- nvmf/common.sh@479 -- # killprocess 391687 00:21:44.259 04:12:58 -- common/autotest_common.sh@936 -- # '[' -z 391687 ']' 00:21:44.259 04:12:58 -- common/autotest_common.sh@940 -- # kill -0 391687 00:21:44.259 04:12:58 -- common/autotest_common.sh@941 -- # uname 00:21:44.259 04:12:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:44.259 04:12:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 391687 00:21:44.520 04:12:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:44.520 04:12:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:44.520 04:12:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 391687' 00:21:44.520 killing process with pid 391687 00:21:44.520 04:12:58 -- common/autotest_common.sh@955 -- # kill 391687 00:21:44.520 04:12:58 -- common/autotest_common.sh@960 -- # wait 391687 00:21:44.520 04:12:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:44.520 04:12:59 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:21:44.520 
04:12:59 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:44.520 04:12:59 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:44.520 04:12:59 -- host/auth.sh@27 -- # clean_kernel_target 00:21:44.520 04:12:59 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:21:44.520 04:12:59 -- nvmf/common.sh@675 -- # echo 0 00:21:44.520 04:12:59 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:44.520 04:12:59 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:44.520 04:12:59 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:44.520 04:12:59 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:44.520 04:12:59 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:21:44.520 04:12:59 -- nvmf/common.sh@684 -- # modprobe -r nvmet_rdma nvmet 00:21:44.778 04:12:59 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:21:47.315 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:21:47.315 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:21:47.315 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:21:47.315 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:21:47.315 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:21:47.315 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:21:47.315 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:21:47.315 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:21:47.315 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:21:47.315 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:21:47.315 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:21:47.315 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:21:47.315 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 
00:21:47.315 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:21:47.315 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:21:47.315 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:21:50.607 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:21:51.985 04:13:06 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.oXJ /tmp/spdk.key-null.j1A /tmp/spdk.key-sha256.4Kx /tmp/spdk.key-sha384.euK /tmp/spdk.key-sha512.6Ll /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:21:51.985 04:13:06 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:21:54.526 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:21:54.526 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:21:54.526 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:21:54.526 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:21:54.526 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:21:54.526 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:21:54.526 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:21:54.526 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:21:54.526 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:21:54.526 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:21:54.526 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:21:54.526 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:21:54.526 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:21:54.526 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:21:54.526 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:21:54.526 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:21:54.526 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:21:55.908 00:21:55.908 real 0m57.655s 00:21:55.908 user 0m45.713s 00:21:55.908 sys 0m14.206s 00:21:55.908 04:13:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 
00:21:55.908 04:13:10 -- common/autotest_common.sh@10 -- # set +x 00:21:55.908 ************************************ 00:21:55.908 END TEST nvmf_auth 00:21:55.908 ************************************ 00:21:55.908 04:13:10 -- nvmf/nvmf.sh@104 -- # [[ rdma == \t\c\p ]] 00:21:55.908 04:13:10 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:21:55.908 04:13:10 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]] 00:21:55.908 04:13:10 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]] 00:21:55.908 04:13:10 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:21:55.908 04:13:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:55.908 04:13:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:55.908 04:13:10 -- common/autotest_common.sh@10 -- # set +x 00:21:55.908 ************************************ 00:21:55.908 START TEST nvmf_bdevperf 00:21:55.908 ************************************ 00:21:55.908 04:13:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:21:55.908 * Looking for test storage... 
00:21:55.908 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:55.908 04:13:10 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:55.908 04:13:10 -- nvmf/common.sh@7 -- # uname -s 00:21:55.908 04:13:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.908 04:13:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.908 04:13:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.908 04:13:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.908 04:13:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.908 04:13:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.908 04:13:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.908 04:13:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.908 04:13:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.908 04:13:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.908 04:13:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:21:55.908 04:13:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:21:55.908 04:13:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.908 04:13:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.908 04:13:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:55.908 04:13:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:55.908 04:13:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:55.908 04:13:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.908 04:13:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.908 04:13:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.909 04:13:10 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.909 04:13:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.909 04:13:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.909 04:13:10 -- paths/export.sh@5 -- # export PATH 00:21:55.909 04:13:10 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.909 04:13:10 -- nvmf/common.sh@47 -- # : 0 00:21:55.909 04:13:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:55.909 04:13:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:55.909 04:13:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:55.909 04:13:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.909 04:13:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.169 04:13:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:56.169 04:13:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:56.169 04:13:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:56.169 04:13:10 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:56.169 04:13:10 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:56.169 04:13:10 -- host/bdevperf.sh@24 -- # nvmftestinit 00:21:56.169 04:13:10 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:21:56.169 04:13:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:56.169 04:13:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:56.169 04:13:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:56.169 04:13:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:56.169 04:13:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.169 04:13:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:56.169 04:13:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.169 04:13:10 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:56.169 04:13:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:56.169 04:13:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:56.169 04:13:10 -- common/autotest_common.sh@10 -- # set +x 00:22:01.442 04:13:15 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:01.442 04:13:15 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:01.442 04:13:15 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:01.442 04:13:15 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:01.442 04:13:15 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:01.442 04:13:15 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:01.442 04:13:15 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:01.442 04:13:15 -- nvmf/common.sh@295 -- # net_devs=() 00:22:01.442 04:13:15 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:01.442 04:13:15 -- nvmf/common.sh@296 -- # e810=() 00:22:01.442 04:13:15 -- nvmf/common.sh@296 -- # local -ga e810 00:22:01.442 04:13:15 -- nvmf/common.sh@297 -- # x722=() 00:22:01.442 04:13:15 -- nvmf/common.sh@297 -- # local -ga x722 00:22:01.442 04:13:15 -- nvmf/common.sh@298 -- # mlx=() 00:22:01.442 04:13:15 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:01.442 04:13:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:01.442 04:13:15 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:01.442 04:13:15 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:01.442 04:13:15 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:01.442 04:13:15 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:01.442 04:13:15 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:01.442 04:13:15 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:01.442 04:13:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:01.442 04:13:15 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:01.442 04:13:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:01.442 04:13:15 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:01.442 04:13:15 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:01.442 04:13:15 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:01.442 04:13:15 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:01.442 04:13:15 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:01.442 04:13:15 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:01.442 04:13:15 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:01.442 04:13:15 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:01.442 04:13:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:01.442 04:13:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:22:01.442 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:22:01.442 04:13:15 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:01.442 04:13:15 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:01.442 04:13:15 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:01.442 04:13:15 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:01.442 04:13:15 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:01.442 04:13:15 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:01.442 04:13:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:01.442 04:13:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:22:01.442 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:22:01.442 04:13:15 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:01.442 04:13:15 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:01.442 04:13:15 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:01.442 04:13:15 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:01.442 04:13:15 -- nvmf/common.sh@352 -- 
# [[ rdma == rdma ]] 00:22:01.442 04:13:15 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:01.442 04:13:15 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:01.442 04:13:15 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:01.442 04:13:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:01.442 04:13:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.442 04:13:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:01.442 04:13:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.443 04:13:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:22:01.443 Found net devices under 0000:18:00.0: mlx_0_0 00:22:01.443 04:13:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.443 04:13:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:01.443 04:13:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.443 04:13:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:01.443 04:13:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.443 04:13:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:22:01.443 Found net devices under 0000:18:00.1: mlx_0_1 00:22:01.443 04:13:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.443 04:13:15 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:01.443 04:13:15 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:01.443 04:13:15 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:01.443 04:13:15 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:22:01.443 04:13:15 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:22:01.443 04:13:15 -- nvmf/common.sh@409 -- # rdma_device_init 00:22:01.443 04:13:15 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:22:01.443 04:13:15 -- nvmf/common.sh@58 -- # uname 00:22:01.443 04:13:15 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:01.443 04:13:15 -- nvmf/common.sh@62 
-- # modprobe ib_cm 00:22:01.443 04:13:15 -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:01.443 04:13:15 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:01.443 04:13:15 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:01.443 04:13:15 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:01.443 04:13:15 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:01.443 04:13:15 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:01.443 04:13:15 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:22:01.443 04:13:15 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:01.443 04:13:15 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:01.443 04:13:15 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:01.443 04:13:15 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:01.443 04:13:15 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:01.443 04:13:15 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:01.443 04:13:15 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:01.443 04:13:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:01.443 04:13:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:01.443 04:13:15 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:01.443 04:13:15 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:01.443 04:13:15 -- nvmf/common.sh@105 -- # continue 2 00:22:01.443 04:13:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:01.443 04:13:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:01.443 04:13:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:01.443 04:13:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:01.443 04:13:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:01.443 04:13:15 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:01.443 04:13:15 -- nvmf/common.sh@105 -- # continue 2 00:22:01.443 04:13:15 -- nvmf/common.sh@73 -- # 
for nic_name in $(get_rdma_if_list) 00:22:01.443 04:13:15 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:01.443 04:13:15 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:01.443 04:13:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:01.443 04:13:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:01.443 04:13:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:01.443 04:13:15 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:01.443 04:13:15 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:01.443 04:13:15 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:01.443 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:01.443 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:22:01.443 altname enp24s0f0np0 00:22:01.443 altname ens785f0np0 00:22:01.443 inet 192.168.100.8/24 scope global mlx_0_0 00:22:01.443 valid_lft forever preferred_lft forever 00:22:01.443 04:13:15 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:01.443 04:13:15 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:01.443 04:13:15 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:01.443 04:13:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:01.443 04:13:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:01.443 04:13:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:01.443 04:13:15 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:01.443 04:13:15 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:01.443 04:13:15 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:01.443 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:01.443 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:22:01.443 altname enp24s0f1np1 00:22:01.443 altname ens785f1np1 00:22:01.443 inet 192.168.100.9/24 scope global mlx_0_1 00:22:01.443 valid_lft forever preferred_lft forever 00:22:01.443 04:13:15 -- nvmf/common.sh@411 -- # return 0 00:22:01.443 04:13:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:01.443 04:13:15 -- 
nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:01.443 04:13:15 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:22:01.443 04:13:15 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:22:01.443 04:13:15 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:01.443 04:13:15 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:01.443 04:13:15 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:01.443 04:13:15 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:01.443 04:13:15 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:01.443 04:13:15 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:01.443 04:13:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:01.443 04:13:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:01.443 04:13:15 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:01.443 04:13:15 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:01.443 04:13:15 -- nvmf/common.sh@105 -- # continue 2 00:22:01.443 04:13:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:01.443 04:13:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:01.443 04:13:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:01.443 04:13:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:01.443 04:13:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:01.443 04:13:15 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:01.443 04:13:15 -- nvmf/common.sh@105 -- # continue 2 00:22:01.443 04:13:15 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:01.443 04:13:15 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:01.443 04:13:15 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:01.443 04:13:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:01.443 04:13:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:01.443 04:13:15 -- nvmf/common.sh@113 
-- # cut -d/ -f1 00:22:01.443 04:13:15 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:01.443 04:13:15 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:01.443 04:13:15 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:01.443 04:13:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:01.443 04:13:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:01.443 04:13:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:01.443 04:13:15 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:22:01.443 192.168.100.9' 00:22:01.443 04:13:15 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:22:01.443 192.168.100.9' 00:22:01.443 04:13:15 -- nvmf/common.sh@446 -- # head -n 1 00:22:01.443 04:13:15 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:01.443 04:13:15 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:22:01.443 192.168.100.9' 00:22:01.443 04:13:15 -- nvmf/common.sh@447 -- # tail -n +2 00:22:01.443 04:13:15 -- nvmf/common.sh@447 -- # head -n 1 00:22:01.443 04:13:15 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:01.443 04:13:15 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:22:01.443 04:13:15 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:01.443 04:13:15 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:22:01.443 04:13:15 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:22:01.443 04:13:15 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:22:01.443 04:13:15 -- host/bdevperf.sh@25 -- # tgt_init 00:22:01.443 04:13:15 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:22:01.443 04:13:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:01.443 04:13:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:01.443 04:13:15 -- common/autotest_common.sh@10 -- # set +x 00:22:01.443 04:13:15 -- nvmf/common.sh@470 -- # nvmfpid=406978 00:22:01.443 04:13:15 -- nvmf/common.sh@471 -- # waitforlisten 406978 00:22:01.443 04:13:15 -- nvmf/common.sh@469 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:01.443 04:13:15 -- common/autotest_common.sh@817 -- # '[' -z 406978 ']' 00:22:01.443 04:13:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.443 04:13:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:01.443 04:13:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.443 04:13:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:01.443 04:13:15 -- common/autotest_common.sh@10 -- # set +x 00:22:01.443 [2024-04-19 04:13:15.955962] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:22:01.443 [2024-04-19 04:13:15.956006] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.702 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.702 [2024-04-19 04:13:16.007326] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:01.702 [2024-04-19 04:13:16.082339] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.702 [2024-04-19 04:13:16.082372] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.702 [2024-04-19 04:13:16.082379] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.702 [2024-04-19 04:13:16.082384] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.702 [2024-04-19 04:13:16.082388] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:01.702 [2024-04-19 04:13:16.082425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.702 [2024-04-19 04:13:16.082508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:01.702 [2024-04-19 04:13:16.082509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.269 04:13:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:02.269 04:13:16 -- common/autotest_common.sh@850 -- # return 0 00:22:02.269 04:13:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:02.269 04:13:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:02.269 04:13:16 -- common/autotest_common.sh@10 -- # set +x 00:22:02.269 04:13:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.269 04:13:16 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:02.269 04:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:02.269 04:13:16 -- common/autotest_common.sh@10 -- # set +x 00:22:02.528 [2024-04-19 04:13:16.804076] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22d4ee0/0x22d93d0) succeed. 00:22:02.528 [2024-04-19 04:13:16.813134] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22d6430/0x231aa60) succeed. 
00:22:02.528 04:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:02.528 04:13:16 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:02.528 04:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:02.528 04:13:16 -- common/autotest_common.sh@10 -- # set +x 00:22:02.528 Malloc0 00:22:02.528 04:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:02.528 04:13:16 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:02.528 04:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:02.528 04:13:16 -- common/autotest_common.sh@10 -- # set +x 00:22:02.528 04:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:02.528 04:13:16 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:02.528 04:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:02.528 04:13:16 -- common/autotest_common.sh@10 -- # set +x 00:22:02.528 04:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:02.528 04:13:16 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:02.528 04:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:02.528 04:13:16 -- common/autotest_common.sh@10 -- # set +x 00:22:02.528 [2024-04-19 04:13:16.952329] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:02.528 04:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:02.528 04:13:16 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:22:02.528 04:13:16 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:22:02.528 04:13:16 -- nvmf/common.sh@521 -- # config=() 00:22:02.528 04:13:16 -- nvmf/common.sh@521 -- # local subsystem config 00:22:02.528 04:13:16 -- nvmf/common.sh@523 -- 
# for subsystem in "${@:-1}" 00:22:02.528 04:13:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:02.528 { 00:22:02.528 "params": { 00:22:02.528 "name": "Nvme$subsystem", 00:22:02.528 "trtype": "$TEST_TRANSPORT", 00:22:02.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.528 "adrfam": "ipv4", 00:22:02.528 "trsvcid": "$NVMF_PORT", 00:22:02.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.528 "hdgst": ${hdgst:-false}, 00:22:02.528 "ddgst": ${ddgst:-false} 00:22:02.528 }, 00:22:02.528 "method": "bdev_nvme_attach_controller" 00:22:02.528 } 00:22:02.528 EOF 00:22:02.528 )") 00:22:02.528 04:13:16 -- nvmf/common.sh@543 -- # cat 00:22:02.528 04:13:16 -- nvmf/common.sh@545 -- # jq . 00:22:02.528 04:13:16 -- nvmf/common.sh@546 -- # IFS=, 00:22:02.528 04:13:16 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:02.528 "params": { 00:22:02.528 "name": "Nvme1", 00:22:02.528 "trtype": "rdma", 00:22:02.528 "traddr": "192.168.100.8", 00:22:02.528 "adrfam": "ipv4", 00:22:02.528 "trsvcid": "4420", 00:22:02.528 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.528 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:02.528 "hdgst": false, 00:22:02.528 "ddgst": false 00:22:02.528 }, 00:22:02.528 "method": "bdev_nvme_attach_controller" 00:22:02.528 }' 00:22:02.528 [2024-04-19 04:13:16.999515] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:22:02.528 [2024-04-19 04:13:16.999559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407031 ] 00:22:02.528 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.528 [2024-04-19 04:13:17.051054] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.788 [2024-04-19 04:13:17.119982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.788 Running I/O for 1 seconds... 00:22:04.165 00:22:04.165 Latency(us) 00:22:04.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.165 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:04.165 Verification LBA range: start 0x0 length 0x4000 00:22:04.165 Nvme1n1 : 1.01 19713.24 77.00 0.00 0.00 6459.07 2318.03 11845.03 00:22:04.165 =================================================================================================================== 00:22:04.165 Total : 19713.24 77.00 0.00 0.00 6459.07 2318.03 11845.03 00:22:04.165 04:13:18 -- host/bdevperf.sh@30 -- # bdevperfpid=407337 00:22:04.165 04:13:18 -- host/bdevperf.sh@32 -- # sleep 3 00:22:04.165 04:13:18 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:22:04.165 04:13:18 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:22:04.165 04:13:18 -- nvmf/common.sh@521 -- # config=() 00:22:04.165 04:13:18 -- nvmf/common.sh@521 -- # local subsystem config 00:22:04.165 04:13:18 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:04.165 04:13:18 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:04.165 { 00:22:04.165 "params": { 00:22:04.165 "name": "Nvme$subsystem", 00:22:04.165 "trtype": "$TEST_TRANSPORT", 00:22:04.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.165 "adrfam": "ipv4", 00:22:04.165 
"trsvcid": "$NVMF_PORT", 00:22:04.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.165 "hdgst": ${hdgst:-false}, 00:22:04.165 "ddgst": ${ddgst:-false} 00:22:04.165 }, 00:22:04.165 "method": "bdev_nvme_attach_controller" 00:22:04.165 } 00:22:04.165 EOF 00:22:04.165 )") 00:22:04.165 04:13:18 -- nvmf/common.sh@543 -- # cat 00:22:04.165 04:13:18 -- nvmf/common.sh@545 -- # jq . 00:22:04.165 04:13:18 -- nvmf/common.sh@546 -- # IFS=, 00:22:04.165 04:13:18 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:04.165 "params": { 00:22:04.165 "name": "Nvme1", 00:22:04.165 "trtype": "rdma", 00:22:04.165 "traddr": "192.168.100.8", 00:22:04.165 "adrfam": "ipv4", 00:22:04.165 "trsvcid": "4420", 00:22:04.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:04.165 "hdgst": false, 00:22:04.165 "ddgst": false 00:22:04.165 }, 00:22:04.165 "method": "bdev_nvme_attach_controller" 00:22:04.165 }' 00:22:04.165 [2024-04-19 04:13:18.563674] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:22:04.165 [2024-04-19 04:13:18.563719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407337 ] 00:22:04.165 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.165 [2024-04-19 04:13:18.612559] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.165 [2024-04-19 04:13:18.678681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.424 Running I/O for 15 seconds... 
00:22:07.710 04:13:21 -- host/bdevperf.sh@33 -- # kill -9 406978
00:22:07.710 04:13:21 -- host/bdevperf.sh@35 -- # sleep 3
00:22:08.279 [2024-04-19 04:13:22.544706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:32880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:08.279 [2024-04-19 04:13:22.544738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0
[... identical command/completion pairs repeat for every outstanding WRITE from lba:32888 through lba:33784 (len:8 each, SGL DATA BLOCK): after the target process was killed, each write on qid:1 completed with ABORTED - SQ DELETION (00/08) ...]
00:22:08.282 [2024-04-19 04:13:22.546214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x186f00
00:22:08.282 [2024-04-19 04:13:22.546220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0
[... identical command/completion pairs repeat for the outstanding READs at lba:32776, lba:32784 and lba:32792 (SGL KEYED DATA BLOCK, key:0x186f00), likewise completing with ABORTED - SQ DELETION (00/08); the final completion notice is truncated in the captured log ...]
cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:08.282 [2024-04-19 04:13:22.546266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x186f00 00:22:08.282 [2024-04-19 04:13:22.546272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:08.282 [2024-04-19 04:13:22.546279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:32808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x186f00 00:22:08.282 [2024-04-19 04:13:22.546285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:08.282 [2024-04-19 04:13:22.546292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x186f00 00:22:08.282 [2024-04-19 04:13:22.546298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:08.282 [2024-04-19 04:13:22.546305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:32824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x186f00 00:22:08.282 [2024-04-19 04:13:22.546311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:08.282 [2024-04-19 04:13:22.546318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x186f00 00:22:08.282 [2024-04-19 04:13:22.546324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:08.282 [2024-04-19 04:13:22.546331] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:32840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x186f00 00:22:08.282 [2024-04-19 04:13:22.546337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:08.282 [2024-04-19 04:13:22.546344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:32848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x186f00 00:22:08.282 [2024-04-19 04:13:22.546350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:08.282 [2024-04-19 04:13:22.546357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:32856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x186f00 00:22:08.282 [2024-04-19 04:13:22.546362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:08.282 [2024-04-19 04:13:22.546369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:32864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x186f00 00:22:08.282 [2024-04-19 04:13:22.546375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:08.282 [2024-04-19 04:13:22.548273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.282 [2024-04-19 04:13:22.548284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.282 [2024-04-19 04:13:22.548289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32872 len:8 PRP1 0x0 PRP2 0x0 00:22:08.282 [2024-04-19 04:13:22.548296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.282 [2024-04-19 04:13:22.548329] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a00 was disconnected and freed. reset controller. 00:22:08.282 [2024-04-19 04:13:22.550840] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:08.282 [2024-04-19 04:13:22.564110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:08.282 [2024-04-19 04:13:22.567481] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:08.282 [2024-04-19 04:13:22.567502] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:08.282 [2024-04-19 04:13:22.567510] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:22:09.214 [2024-04-19 04:13:23.571445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:09.214 [2024-04-19 04:13:23.571495] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:09.214 [2024-04-19 04:13:23.571746] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:09.214 [2024-04-19 04:13:23.571754] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:09.214 [2024-04-19 04:13:23.571761] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:22:09.214 [2024-04-19 04:13:23.573682] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:09.214 [2024-04-19 04:13:23.574257] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:09.214 [2024-04-19 04:13:23.586270] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:09.214 [2024-04-19 04:13:23.588847] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:09.214 [2024-04-19 04:13:23.588864] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:09.214 [2024-04-19 04:13:23.588870] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:22:10.149 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 406978 Killed "${NVMF_APP[@]}" "$@" 00:22:10.149 04:13:24 -- host/bdevperf.sh@36 -- # tgt_init 00:22:10.149 04:13:24 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:22:10.149 04:13:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:10.149 04:13:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:10.149 04:13:24 -- common/autotest_common.sh@10 -- # set +x 00:22:10.149 04:13:24 -- nvmf/common.sh@470 -- # nvmfpid=408444 00:22:10.149 04:13:24 -- nvmf/common.sh@471 -- # waitforlisten 408444 00:22:10.149 04:13:24 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:10.149 04:13:24 -- common/autotest_common.sh@817 -- # '[' -z 408444 ']' 00:22:10.149 04:13:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.149 04:13:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:10.149 04:13:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:10.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.149 04:13:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:10.149 04:13:24 -- common/autotest_common.sh@10 -- # set +x 00:22:10.149 [2024-04-19 04:13:24.582893] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:22:10.149 [2024-04-19 04:13:24.582928] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.149 [2024-04-19 04:13:24.592759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:10.149 [2024-04-19 04:13:24.592777] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:10.149 [2024-04-19 04:13:24.592937] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:10.149 [2024-04-19 04:13:24.592945] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:10.149 [2024-04-19 04:13:24.592953] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:22:10.149 [2024-04-19 04:13:24.595474] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:10.149 [2024-04-19 04:13:24.598294] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:10.149 [2024-04-19 04:13:24.600592] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:10.149 [2024-04-19 04:13:24.600610] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:10.149 [2024-04-19 04:13:24.600616] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:22:10.149 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.149 [2024-04-19 04:13:24.633573] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:10.408 [2024-04-19 04:13:24.707062] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:10.409 [2024-04-19 04:13:24.707096] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:10.409 [2024-04-19 04:13:24.707102] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:10.409 [2024-04-19 04:13:24.707108] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:10.409 [2024-04-19 04:13:24.707112] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:10.409 [2024-04-19 04:13:24.707144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.409 [2024-04-19 04:13:24.707226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:10.409 [2024-04-19 04:13:24.707227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.975 04:13:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:10.975 04:13:25 -- common/autotest_common.sh@850 -- # return 0 00:22:10.975 04:13:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:10.975 04:13:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:10.975 04:13:25 -- common/autotest_common.sh@10 -- # set +x 00:22:10.975 04:13:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.975 04:13:25 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:10.975 04:13:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.975 04:13:25 -- common/autotest_common.sh@10 -- # set +x 00:22:10.975 [2024-04-19 04:13:25.429487] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e39ee0/0x1e3e3d0) succeed. 00:22:10.975 [2024-04-19 04:13:25.438676] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e3b430/0x1e7fa60) succeed. 
00:22:11.234 04:13:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:11.234 04:13:25 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:11.234 04:13:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:11.234 04:13:25 -- common/autotest_common.sh@10 -- # set +x 00:22:11.234 Malloc0 00:22:11.234 04:13:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:11.234 04:13:25 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:11.234 04:13:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:11.234 04:13:25 -- common/autotest_common.sh@10 -- # set +x 00:22:11.234 04:13:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:11.234 04:13:25 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:11.234 04:13:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:11.234 04:13:25 -- common/autotest_common.sh@10 -- # set +x 00:22:11.234 04:13:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:11.234 04:13:25 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:11.234 04:13:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:11.234 04:13:25 -- common/autotest_common.sh@10 -- # set +x 00:22:11.234 [2024-04-19 04:13:25.582717] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:11.234 04:13:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:11.234 04:13:25 -- host/bdevperf.sh@38 -- # wait 407337 00:22:11.234 [2024-04-19 04:13:25.604579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:11.234 [2024-04-19 04:13:25.604601] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:11.234 [2024-04-19 04:13:25.604761] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:11.234 [2024-04-19 04:13:25.604769] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:11.234 [2024-04-19 04:13:25.604775] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:22:11.234 [2024-04-19 04:13:25.605380] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:11.234 [2024-04-19 04:13:25.607302] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:11.234 [2024-04-19 04:13:25.618121] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:11.234 [2024-04-19 04:13:25.666388] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:19.350 00:22:19.350 Latency(us) 00:22:19.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.350 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:19.350 Verification LBA range: start 0x0 length 0x4000 00:22:19.350 Nvme1n1 : 15.01 14146.59 55.26 11354.66 0.00 5002.00 421.74 1025274.31 00:22:19.350 =================================================================================================================== 00:22:19.350 Total : 14146.59 55.26 11354.66 0.00 5002.00 421.74 1025274.31 00:22:19.608 04:13:34 -- host/bdevperf.sh@39 -- # sync 00:22:19.608 04:13:34 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:19.608 04:13:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.608 04:13:34 -- common/autotest_common.sh@10 -- # set +x 00:22:19.608 04:13:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.608 04:13:34 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:22:19.608 04:13:34 -- host/bdevperf.sh@44 -- # nvmftestfini 00:22:19.608 04:13:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:19.608 04:13:34 -- nvmf/common.sh@117 -- # sync 00:22:19.608 04:13:34 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:19.608 04:13:34 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:19.608 04:13:34 -- nvmf/common.sh@120 -- # set +e 00:22:19.608 04:13:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:19.608 04:13:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:19.608 rmmod nvme_rdma 00:22:19.608 rmmod nvme_fabrics 00:22:19.866 04:13:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:19.866 04:13:34 -- nvmf/common.sh@124 -- # set -e 00:22:19.866 04:13:34 -- nvmf/common.sh@125 -- # return 0 00:22:19.866 04:13:34 -- nvmf/common.sh@478 -- # '[' -n 408444 ']' 00:22:19.866 04:13:34 -- nvmf/common.sh@479 -- # killprocess 408444 00:22:19.866 04:13:34 -- common/autotest_common.sh@936 -- # '[' -z 408444 ']' 
00:22:19.866 04:13:34 -- common/autotest_common.sh@940 -- # kill -0 408444 00:22:19.866 04:13:34 -- common/autotest_common.sh@941 -- # uname 00:22:19.866 04:13:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:19.866 04:13:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 408444 00:22:19.866 04:13:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:19.866 04:13:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:19.866 04:13:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 408444' 00:22:19.866 killing process with pid 408444 00:22:19.866 04:13:34 -- common/autotest_common.sh@955 -- # kill 408444 00:22:19.866 04:13:34 -- common/autotest_common.sh@960 -- # wait 408444 00:22:20.125 04:13:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:20.125 04:13:34 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:22:20.125 00:22:20.125 real 0m24.147s 00:22:20.125 user 1m4.060s 00:22:20.125 sys 0m5.120s 00:22:20.125 04:13:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:20.125 04:13:34 -- common/autotest_common.sh@10 -- # set +x 00:22:20.125 ************************************ 00:22:20.125 END TEST nvmf_bdevperf 00:22:20.125 ************************************ 00:22:20.125 04:13:34 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:22:20.125 04:13:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:20.125 04:13:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:20.125 04:13:34 -- common/autotest_common.sh@10 -- # set +x 00:22:20.125 ************************************ 00:22:20.125 START TEST nvmf_target_disconnect 00:22:20.125 ************************************ 00:22:20.125 04:13:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:22:20.385 * 
Looking for test storage... 00:22:20.385 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:20.385 04:13:34 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:20.385 04:13:34 -- nvmf/common.sh@7 -- # uname -s 00:22:20.385 04:13:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:20.385 04:13:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:20.385 04:13:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:20.385 04:13:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:20.385 04:13:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:20.385 04:13:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:20.385 04:13:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:20.385 04:13:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:20.385 04:13:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:20.385 04:13:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:20.385 04:13:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:22:20.385 04:13:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:22:20.385 04:13:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:20.385 04:13:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:20.385 04:13:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:20.385 04:13:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:20.385 04:13:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:20.385 04:13:34 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:20.385 04:13:34 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.385 04:13:34 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.385 04:13:34 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.385 04:13:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.385 04:13:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.385 04:13:34 -- paths/export.sh@5 -- # export PATH 00:22:20.385 04:13:34 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.385 04:13:34 -- nvmf/common.sh@47 -- # : 0 00:22:20.385 04:13:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:20.385 04:13:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:20.385 04:13:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:20.385 04:13:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:20.385 04:13:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:20.385 04:13:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:20.385 04:13:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:20.385 04:13:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:20.385 04:13:34 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:22:20.385 04:13:34 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:22:20.385 04:13:34 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:22:20.385 04:13:34 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:22:20.385 04:13:34 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:22:20.385 04:13:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:20.385 04:13:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:20.385 04:13:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:20.385 04:13:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:20.385 04:13:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.385 04:13:34 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:20.385 04:13:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.385 04:13:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:20.385 04:13:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:20.385 04:13:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:20.385 04:13:34 -- common/autotest_common.sh@10 -- # set +x 00:22:25.658 04:13:40 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:25.658 04:13:40 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:25.658 04:13:40 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:25.658 04:13:40 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:25.658 04:13:40 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:25.658 04:13:40 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:25.658 04:13:40 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:25.658 04:13:40 -- nvmf/common.sh@295 -- # net_devs=() 00:22:25.658 04:13:40 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:25.658 04:13:40 -- nvmf/common.sh@296 -- # e810=() 00:22:25.658 04:13:40 -- nvmf/common.sh@296 -- # local -ga e810 00:22:25.658 04:13:40 -- nvmf/common.sh@297 -- # x722=() 00:22:25.658 04:13:40 -- nvmf/common.sh@297 -- # local -ga x722 00:22:25.658 04:13:40 -- nvmf/common.sh@298 -- # mlx=() 00:22:25.658 04:13:40 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:25.658 04:13:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:25.658 04:13:40 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:25.658 04:13:40 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:25.658 04:13:40 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:25.658 04:13:40 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:25.658 04:13:40 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:25.658 04:13:40 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:25.658 04:13:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:25.658 04:13:40 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:25.658 04:13:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:25.658 04:13:40 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:25.658 04:13:40 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:25.658 04:13:40 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:25.658 04:13:40 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:25.658 04:13:40 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:25.658 04:13:40 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:25.658 04:13:40 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:25.658 04:13:40 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:25.658 04:13:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:25.658 04:13:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:22:25.658 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:22:25.658 04:13:40 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:25.658 04:13:40 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:25.658 04:13:40 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:25.658 04:13:40 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:25.658 04:13:40 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:25.658 04:13:40 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:25.658 04:13:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:25.658 04:13:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:22:25.658 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:22:25.658 04:13:40 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:25.918 04:13:40 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:25.918 04:13:40 -- 
nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:25.918 04:13:40 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:25.918 04:13:40 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:25.918 04:13:40 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:25.918 04:13:40 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:25.918 04:13:40 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:25.918 04:13:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:25.918 04:13:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.918 04:13:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:25.918 04:13:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.918 04:13:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:22:25.918 Found net devices under 0000:18:00.0: mlx_0_0 00:22:25.918 04:13:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.918 04:13:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:25.918 04:13:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.918 04:13:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:25.918 04:13:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.918 04:13:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:22:25.918 Found net devices under 0000:18:00.1: mlx_0_1 00:22:25.918 04:13:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.918 04:13:40 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:25.918 04:13:40 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:25.918 04:13:40 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:25.918 04:13:40 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:22:25.918 04:13:40 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:22:25.918 04:13:40 -- nvmf/common.sh@409 -- # rdma_device_init 00:22:25.918 04:13:40 -- nvmf/common.sh@490 -- # 
load_ib_rdma_modules 00:22:25.918 04:13:40 -- nvmf/common.sh@58 -- # uname 00:22:25.918 04:13:40 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:25.918 04:13:40 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:25.918 04:13:40 -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:25.918 04:13:40 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:25.918 04:13:40 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:25.918 04:13:40 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:25.918 04:13:40 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:25.918 04:13:40 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:25.918 04:13:40 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:22:25.918 04:13:40 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:25.918 04:13:40 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:25.918 04:13:40 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:25.918 04:13:40 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:25.918 04:13:40 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:25.918 04:13:40 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:25.918 04:13:40 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:25.918 04:13:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:25.918 04:13:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:25.918 04:13:40 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:25.918 04:13:40 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:25.918 04:13:40 -- nvmf/common.sh@105 -- # continue 2 00:22:25.918 04:13:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:25.918 04:13:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:25.918 04:13:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:25.918 04:13:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:25.918 04:13:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 
== \m\l\x\_\0\_\1 ]] 00:22:25.918 04:13:40 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:25.918 04:13:40 -- nvmf/common.sh@105 -- # continue 2 00:22:25.918 04:13:40 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:25.918 04:13:40 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:25.918 04:13:40 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:25.918 04:13:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:25.918 04:13:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:25.918 04:13:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:25.918 04:13:40 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:25.918 04:13:40 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:25.918 04:13:40 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:25.918 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:25.918 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:22:25.918 altname enp24s0f0np0 00:22:25.918 altname ens785f0np0 00:22:25.918 inet 192.168.100.8/24 scope global mlx_0_0 00:22:25.918 valid_lft forever preferred_lft forever 00:22:25.918 04:13:40 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:25.918 04:13:40 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:25.918 04:13:40 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:25.918 04:13:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:25.918 04:13:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:25.918 04:13:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:25.918 04:13:40 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:25.918 04:13:40 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:25.918 04:13:40 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:25.918 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:25.918 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:22:25.918 altname enp24s0f1np1 00:22:25.918 altname ens785f1np1 00:22:25.918 inet 192.168.100.9/24 scope global mlx_0_1 00:22:25.918 
valid_lft forever preferred_lft forever 00:22:25.918 04:13:40 -- nvmf/common.sh@411 -- # return 0 00:22:25.918 04:13:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:25.918 04:13:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:25.918 04:13:40 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:22:25.918 04:13:40 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:22:25.918 04:13:40 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:25.918 04:13:40 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:25.918 04:13:40 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:25.918 04:13:40 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:25.918 04:13:40 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:25.918 04:13:40 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:25.918 04:13:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:25.918 04:13:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:25.918 04:13:40 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:25.918 04:13:40 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:25.918 04:13:40 -- nvmf/common.sh@105 -- # continue 2 00:22:25.918 04:13:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:25.918 04:13:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:25.919 04:13:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:25.919 04:13:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:25.919 04:13:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:25.919 04:13:40 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:25.919 04:13:40 -- nvmf/common.sh@105 -- # continue 2 00:22:25.919 04:13:40 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:25.919 04:13:40 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:25.919 04:13:40 -- nvmf/common.sh@112 -- # 
interface=mlx_0_0 00:22:25.919 04:13:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:25.919 04:13:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:25.919 04:13:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:25.919 04:13:40 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:25.919 04:13:40 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:25.919 04:13:40 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:25.919 04:13:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:25.919 04:13:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:25.919 04:13:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:25.919 04:13:40 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:22:25.919 192.168.100.9' 00:22:25.919 04:13:40 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:22:25.919 192.168.100.9' 00:22:25.919 04:13:40 -- nvmf/common.sh@446 -- # head -n 1 00:22:25.919 04:13:40 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:25.919 04:13:40 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:22:25.919 192.168.100.9' 00:22:25.919 04:13:40 -- nvmf/common.sh@447 -- # tail -n +2 00:22:25.919 04:13:40 -- nvmf/common.sh@447 -- # head -n 1 00:22:25.919 04:13:40 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:25.919 04:13:40 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:22:25.919 04:13:40 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:25.919 04:13:40 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:22:25.919 04:13:40 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:22:25.919 04:13:40 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:22:25.919 04:13:40 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:22:25.919 04:13:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:25.919 04:13:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:25.919 04:13:40 -- 
common/autotest_common.sh@10 -- # set +x 00:22:26.178 ************************************ 00:22:26.178 START TEST nvmf_target_disconnect_tc1 00:22:26.178 ************************************ 00:22:26.178 04:13:40 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 00:22:26.178 04:13:40 -- host/target_disconnect.sh@32 -- # set +e 00:22:26.178 04:13:40 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:22:26.178 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.178 [2024-04-19 04:13:40.606337] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:26.178 [2024-04-19 04:13:40.606377] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:26.178 [2024-04-19 04:13:40.606386] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7080 00:22:27.111 [2024-04-19 04:13:41.610311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:27.111 [2024-04-19 04:13:41.610369] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:22:27.111 [2024-04-19 04:13:41.610377] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:22:27.111 [2024-04-19 04:13:41.610419] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:27.111 [2024-04-19 04:13:41.610426] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:22:27.111 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:22:27.111 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:22:27.111 Initializing NVMe Controllers 00:22:27.111 04:13:41 -- host/target_disconnect.sh@33 -- # trap - ERR 00:22:27.111 04:13:41 -- host/target_disconnect.sh@33 -- # print_backtrace 00:22:27.111 04:13:41 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:22:27.111 04:13:41 -- common/autotest_common.sh@1139 -- # return 0 00:22:27.111 04:13:41 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:22:27.111 04:13:41 -- host/target_disconnect.sh@41 -- # set -e 00:22:27.111 00:22:27.111 real 0m1.101s 00:22:27.111 user 0m0.940s 00:22:27.111 sys 0m0.151s 00:22:27.111 04:13:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:27.111 04:13:41 -- common/autotest_common.sh@10 -- # set +x 00:22:27.111 ************************************ 00:22:27.111 END TEST nvmf_target_disconnect_tc1 00:22:27.111 ************************************ 00:22:27.370 04:13:41 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:22:27.370 04:13:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:27.370 04:13:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:27.370 04:13:41 -- common/autotest_common.sh@10 -- # set +x 00:22:27.370 ************************************ 00:22:27.370 START TEST nvmf_target_disconnect_tc2 00:22:27.370 ************************************ 00:22:27.370 04:13:41 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 
00:22:27.370 04:13:41 -- host/target_disconnect.sh@45 -- # disconnect_init 192.168.100.8 00:22:27.370 04:13:41 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:22:27.370 04:13:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:27.370 04:13:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:27.370 04:13:41 -- common/autotest_common.sh@10 -- # set +x 00:22:27.370 04:13:41 -- nvmf/common.sh@470 -- # nvmfpid=413727 00:22:27.370 04:13:41 -- nvmf/common.sh@471 -- # waitforlisten 413727 00:22:27.370 04:13:41 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:22:27.370 04:13:41 -- common/autotest_common.sh@817 -- # '[' -z 413727 ']' 00:22:27.370 04:13:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.370 04:13:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:27.370 04:13:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.371 04:13:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:27.371 04:13:41 -- common/autotest_common.sh@10 -- # set +x 00:22:27.371 [2024-04-19 04:13:41.836931] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:22:27.371 [2024-04-19 04:13:41.836971] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.371 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.629 [2024-04-19 04:13:41.905321] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:27.629 [2024-04-19 04:13:41.972468] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:27.629 [2024-04-19 04:13:41.972508] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.629 [2024-04-19 04:13:41.972514] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.629 [2024-04-19 04:13:41.972519] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.630 [2024-04-19 04:13:41.972523] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:27.630 [2024-04-19 04:13:41.972649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:27.630 [2024-04-19 04:13:41.972764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:27.630 [2024-04-19 04:13:41.972868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:27.630 [2024-04-19 04:13:41.972870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:22:28.196 04:13:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:28.196 04:13:42 -- common/autotest_common.sh@850 -- # return 0 00:22:28.196 04:13:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:28.196 04:13:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:28.196 04:13:42 -- common/autotest_common.sh@10 -- # set +x 00:22:28.196 04:13:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.196 04:13:42 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:28.196 04:13:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.196 04:13:42 -- common/autotest_common.sh@10 -- # set +x 00:22:28.196 Malloc0 00:22:28.196 04:13:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.196 04:13:42 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:22:28.196 04:13:42 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:22:28.196 04:13:42 -- common/autotest_common.sh@10 -- # set +x 00:22:28.196 [2024-04-19 04:13:42.702116] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ef9770/0x1f05380) succeed. 00:22:28.196 [2024-04-19 04:13:42.711591] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1efad60/0x1f85400) succeed. 00:22:28.455 04:13:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.455 04:13:42 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:28.455 04:13:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.455 04:13:42 -- common/autotest_common.sh@10 -- # set +x 00:22:28.455 04:13:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.455 04:13:42 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:28.455 04:13:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.455 04:13:42 -- common/autotest_common.sh@10 -- # set +x 00:22:28.455 04:13:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.455 04:13:42 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:28.455 04:13:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.455 04:13:42 -- common/autotest_common.sh@10 -- # set +x 00:22:28.455 [2024-04-19 04:13:42.842909] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:28.455 04:13:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.455 04:13:42 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:22:28.455 04:13:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.455 04:13:42 -- common/autotest_common.sh@10 -- # set +x 00:22:28.455 04:13:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.455 
04:13:42 -- host/target_disconnect.sh@50 -- # reconnectpid=414006 00:22:28.455 04:13:42 -- host/target_disconnect.sh@52 -- # sleep 2 00:22:28.455 04:13:42 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:22:28.455 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.356 04:13:44 -- host/target_disconnect.sh@53 -- # kill -9 413727 00:22:30.356 04:13:44 -- host/target_disconnect.sh@55 -- # sleep 2 00:22:31.730 Write completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Read completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Read completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Read completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Read completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Read completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Write completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Write completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Read completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Read completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Read completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Read completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Write completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Write completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Write completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Read completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Write completed with error 
(sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Write completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Write completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Read completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Read completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Read completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Read completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Write completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Write completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Write completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Write completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Write completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Read completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Write completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Write completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 Write completed with error (sct=0, sc=8) 00:22:31.730 starting I/O failed 00:22:31.730 [2024-04-19 04:13:46.006910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:32.665 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 413727 Killed "${NVMF_APP[@]}" "$@" 00:22:32.665 04:13:46 -- host/target_disconnect.sh@56 -- # disconnect_init 192.168.100.8 00:22:32.665 04:13:46 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:22:32.665 04:13:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:32.665 04:13:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:32.665 
04:13:46 -- common/autotest_common.sh@10 -- # set +x 00:22:32.665 04:13:46 -- nvmf/common.sh@470 -- # nvmfpid=414709 00:22:32.665 04:13:46 -- nvmf/common.sh@471 -- # waitforlisten 414709 00:22:32.665 04:13:46 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:22:32.665 04:13:46 -- common/autotest_common.sh@817 -- # '[' -z 414709 ']' 00:22:32.665 04:13:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.665 04:13:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:32.665 04:13:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.665 04:13:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:32.665 04:13:46 -- common/autotest_common.sh@10 -- # set +x 00:22:32.665 [2024-04-19 04:13:46.913141] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:22:32.665 [2024-04-19 04:13:46.913185] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.665 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.665 [2024-04-19 04:13:46.980272] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:32.665 Read completed with error (sct=0, sc=8) 00:22:32.665 starting I/O failed 00:22:32.665 Read completed with error (sct=0, sc=8) 00:22:32.665 starting I/O failed 00:22:32.665 Read completed with error (sct=0, sc=8) 00:22:32.665 starting I/O failed 00:22:32.665 Write completed with error (sct=0, sc=8) 00:22:32.665 starting I/O failed 00:22:32.666 Read completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 Read completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 Write completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 Read completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 Read completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 Read completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 Write completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 Read completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 Read completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 Write completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 Write completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 Read completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 Write completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 Write completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 Read 
completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 Read completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 Read completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 Read completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 Write completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 Write completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 Write completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 Read completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 Write completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 Read completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 Write completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 Write completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 Write completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 Write completed with error (sct=0, sc=8) 00:22:32.666 starting I/O failed 00:22:32.666 [2024-04-19 04:13:47.011937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:32.666 [2024-04-19 04:13:47.050797] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.666 [2024-04-19 04:13:47.050832] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.666 [2024-04-19 04:13:47.050838] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.666 [2024-04-19 04:13:47.050843] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:32.666 [2024-04-19 04:13:47.050848] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:32.666 [2024-04-19 04:13:47.050968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:32.666 [2024-04-19 04:13:47.051085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:32.666 [2024-04-19 04:13:47.051190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:32.666 [2024-04-19 04:13:47.051192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:22:33.232 04:13:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:33.232 04:13:47 -- common/autotest_common.sh@850 -- # return 0 00:22:33.232 04:13:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:33.232 04:13:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:33.232 04:13:47 -- common/autotest_common.sh@10 -- # set +x 00:22:33.232 04:13:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.232 04:13:47 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:33.232 04:13:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.232 04:13:47 -- common/autotest_common.sh@10 -- # set +x 00:22:33.232 Malloc0 00:22:33.232 04:13:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.232 04:13:47 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:22:33.232 04:13:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.232 04:13:47 -- common/autotest_common.sh@10 -- # set +x 00:22:33.490 [2024-04-19 04:13:47.780146] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ef1770/0x1efd380) succeed. 00:22:33.490 [2024-04-19 04:13:47.789543] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ef2d60/0x1f7d400) succeed. 
00:22:33.490 04:13:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:33.490 04:13:47 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:33.490 04:13:47 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:33.490 04:13:47 -- common/autotest_common.sh@10 -- # set +x
00:22:33.490 04:13:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:33.490 04:13:47 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:22:33.490 04:13:47 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:33.490 04:13:47 -- common/autotest_common.sh@10 -- # set +x
00:22:33.491 04:13:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:33.491 04:13:47 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:22:33.491 04:13:47 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:33.491 04:13:47 -- common/autotest_common.sh@10 -- # set +x
00:22:33.491 [2024-04-19 04:13:47.922231] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:22:33.491 04:13:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:33.491 04:13:47 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:22:33.491 04:13:47 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:33.491 04:13:47 -- common/autotest_common.sh@10 -- # set +x
00:22:33.491 04:13:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:33.491 04:13:47 -- host/target_disconnect.sh@58 -- # wait 414006
00:22:33.491 Write completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Write completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Read completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Write completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Write completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Read completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Write completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Write completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Read completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Read completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Read completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Read completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Read completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Read completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Read completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Read completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Read completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Write completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Write completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Write completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Read completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Read completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Write completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Read completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Read completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Read completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Read completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Read completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Write completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Read completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Read completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 Write completed with error (sct=0, sc=8)
00:22:33.491 starting I/O failed
00:22:33.491 [2024-04-19 04:13:48.016938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:33.749 [2024-04-19 04:13:48.030210] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:33.749 [2024-04-19 04:13:48.030257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:33.749 [2024-04-19 04:13:48.030279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:33.749 [2024-04-19 04:13:48.030286] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:33.749 [2024-04-19 04:13:48.030292] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:33.749 [2024-04-19 04:13:48.040554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:33.749 qpair failed and we were unable to recover it.
00:22:33.749 [2024-04-19 04:13:48.050234] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:33.749 [2024-04-19 04:13:48.050278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:33.749 [2024-04-19 04:13:48.050293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:33.749 [2024-04-19 04:13:48.050299] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:33.749 [2024-04-19 04:13:48.050304] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:33.749 [2024-04-19 04:13:48.060675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:33.749 qpair failed and we were unable to recover it.
00:22:33.749 [2024-04-19 04:13:48.070251] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:33.749 [2024-04-19 04:13:48.070286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:33.749 [2024-04-19 04:13:48.070302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:33.749 [2024-04-19 04:13:48.070309] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:33.749 [2024-04-19 04:13:48.070315] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:33.749 [2024-04-19 04:13:48.080605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:33.749 qpair failed and we were unable to recover it.
00:22:33.749 [2024-04-19 04:13:48.090389] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:33.749 [2024-04-19 04:13:48.090430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:33.749 [2024-04-19 04:13:48.090444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:33.749 [2024-04-19 04:13:48.090450] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:33.749 [2024-04-19 04:13:48.090455] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:33.749 [2024-04-19 04:13:48.100570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:33.749 qpair failed and we were unable to recover it.
00:22:33.749 [2024-04-19 04:13:48.110246] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:33.749 [2024-04-19 04:13:48.110284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:33.749 [2024-04-19 04:13:48.110299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:33.749 [2024-04-19 04:13:48.110308] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:33.749 [2024-04-19 04:13:48.110313] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:33.749 [2024-04-19 04:13:48.120714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:33.749 qpair failed and we were unable to recover it.
00:22:33.749 [2024-04-19 04:13:48.130492] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:33.749 [2024-04-19 04:13:48.130531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:33.749 [2024-04-19 04:13:48.130545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:33.749 [2024-04-19 04:13:48.130552] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:33.749 [2024-04-19 04:13:48.130557] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:33.749 [2024-04-19 04:13:48.140762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:33.749 qpair failed and we were unable to recover it.
00:22:33.749 [2024-04-19 04:13:48.150442] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:33.749 [2024-04-19 04:13:48.150473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:33.749 [2024-04-19 04:13:48.150487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:33.749 [2024-04-19 04:13:48.150493] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:33.749 [2024-04-19 04:13:48.150498] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:33.749 [2024-04-19 04:13:48.160866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:33.749 qpair failed and we were unable to recover it.
00:22:33.749 [2024-04-19 04:13:48.170604] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:33.750 [2024-04-19 04:13:48.170641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:33.750 [2024-04-19 04:13:48.170655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:33.750 [2024-04-19 04:13:48.170661] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:33.750 [2024-04-19 04:13:48.170667] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:33.750 [2024-04-19 04:13:48.180916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:33.750 qpair failed and we were unable to recover it.
00:22:33.750 [2024-04-19 04:13:48.190602] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:33.750 [2024-04-19 04:13:48.190641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:33.750 [2024-04-19 04:13:48.190655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:33.750 [2024-04-19 04:13:48.190662] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:33.750 [2024-04-19 04:13:48.190668] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:33.750 [2024-04-19 04:13:48.200996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:33.750 qpair failed and we were unable to recover it.
00:22:33.750 [2024-04-19 04:13:48.210675] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:33.750 [2024-04-19 04:13:48.210710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:33.750 [2024-04-19 04:13:48.210723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:33.750 [2024-04-19 04:13:48.210730] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:33.750 [2024-04-19 04:13:48.210735] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:33.750 [2024-04-19 04:13:48.221085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:33.750 qpair failed and we were unable to recover it.
00:22:33.750 [2024-04-19 04:13:48.230672] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:33.750 [2024-04-19 04:13:48.230708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:33.750 [2024-04-19 04:13:48.230722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:33.750 [2024-04-19 04:13:48.230728] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:33.750 [2024-04-19 04:13:48.230734] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:33.750 [2024-04-19 04:13:48.241189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:33.750 qpair failed and we were unable to recover it.
00:22:33.750 [2024-04-19 04:13:48.250761] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:33.750 [2024-04-19 04:13:48.250796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:33.750 [2024-04-19 04:13:48.250810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:33.750 [2024-04-19 04:13:48.250816] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:33.750 [2024-04-19 04:13:48.250821] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:33.750 [2024-04-19 04:13:48.261266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:33.750 qpair failed and we were unable to recover it.
00:22:33.750 [2024-04-19 04:13:48.270921] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:33.750 [2024-04-19 04:13:48.270956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:33.750 [2024-04-19 04:13:48.270970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:33.750 [2024-04-19 04:13:48.270976] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:33.750 [2024-04-19 04:13:48.270981] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.007 [2024-04-19 04:13:48.281281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.007 qpair failed and we were unable to recover it.
00:22:34.007 [2024-04-19 04:13:48.290941] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.007 [2024-04-19 04:13:48.290978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.007 [2024-04-19 04:13:48.290997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.007 [2024-04-19 04:13:48.291004] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.007 [2024-04-19 04:13:48.291009] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.007 [2024-04-19 04:13:48.301217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.007 qpair failed and we were unable to recover it.
00:22:34.007 [2024-04-19 04:13:48.310981] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.007 [2024-04-19 04:13:48.311014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.007 [2024-04-19 04:13:48.311028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.007 [2024-04-19 04:13:48.311034] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.007 [2024-04-19 04:13:48.311039] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.007 [2024-04-19 04:13:48.321418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.007 qpair failed and we were unable to recover it.
00:22:34.007 [2024-04-19 04:13:48.330951] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.007 [2024-04-19 04:13:48.330988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.007 [2024-04-19 04:13:48.331004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.007 [2024-04-19 04:13:48.331010] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.007 [2024-04-19 04:13:48.331015] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.008 [2024-04-19 04:13:48.341377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.008 qpair failed and we were unable to recover it.
00:22:34.008 [2024-04-19 04:13:48.351078] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.008 [2024-04-19 04:13:48.351112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.008 [2024-04-19 04:13:48.351128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.008 [2024-04-19 04:13:48.351134] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.008 [2024-04-19 04:13:48.351139] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.008 [2024-04-19 04:13:48.361272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.008 qpair failed and we were unable to recover it.
00:22:34.008 [2024-04-19 04:13:48.371205] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.008 [2024-04-19 04:13:48.371239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.008 [2024-04-19 04:13:48.371253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.008 [2024-04-19 04:13:48.371259] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.008 [2024-04-19 04:13:48.371268] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.008 [2024-04-19 04:13:48.381604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.008 qpair failed and we were unable to recover it.
00:22:34.008 [2024-04-19 04:13:48.391107] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.008 [2024-04-19 04:13:48.391142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.008 [2024-04-19 04:13:48.391156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.008 [2024-04-19 04:13:48.391162] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.008 [2024-04-19 04:13:48.391167] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.008 [2024-04-19 04:13:48.401569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.008 qpair failed and we were unable to recover it.
00:22:34.008 [2024-04-19 04:13:48.411264] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.008 [2024-04-19 04:13:48.411303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.008 [2024-04-19 04:13:48.411316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.008 [2024-04-19 04:13:48.411322] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.008 [2024-04-19 04:13:48.411328] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.008 [2024-04-19 04:13:48.421489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.008 qpair failed and we were unable to recover it.
00:22:34.008 [2024-04-19 04:13:48.431304] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.008 [2024-04-19 04:13:48.431341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.008 [2024-04-19 04:13:48.431355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.008 [2024-04-19 04:13:48.431361] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.008 [2024-04-19 04:13:48.431366] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.008 [2024-04-19 04:13:48.441766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.008 qpair failed and we were unable to recover it.
00:22:34.008 [2024-04-19 04:13:48.451316] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.008 [2024-04-19 04:13:48.451354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.008 [2024-04-19 04:13:48.451367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.008 [2024-04-19 04:13:48.451373] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.008 [2024-04-19 04:13:48.451378] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.008 [2024-04-19 04:13:48.461694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.008 qpair failed and we were unable to recover it.
00:22:34.008 [2024-04-19 04:13:48.471448] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.008 [2024-04-19 04:13:48.471483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.008 [2024-04-19 04:13:48.471497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.008 [2024-04-19 04:13:48.471503] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.008 [2024-04-19 04:13:48.471509] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.008 [2024-04-19 04:13:48.482028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.008 qpair failed and we were unable to recover it.
00:22:34.008 [2024-04-19 04:13:48.491471] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.008 [2024-04-19 04:13:48.491505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.008 [2024-04-19 04:13:48.491519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.008 [2024-04-19 04:13:48.491525] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.008 [2024-04-19 04:13:48.491530] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.008 [2024-04-19 04:13:48.501928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.008 qpair failed and we were unable to recover it.
00:22:34.008 [2024-04-19 04:13:48.511597] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.008 [2024-04-19 04:13:48.511631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.008 [2024-04-19 04:13:48.511646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.008 [2024-04-19 04:13:48.511652] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.008 [2024-04-19 04:13:48.511657] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.008 [2024-04-19 04:13:48.521876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.008 qpair failed and we were unable to recover it.
00:22:34.008 [2024-04-19 04:13:48.531589] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.008 [2024-04-19 04:13:48.531627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.008 [2024-04-19 04:13:48.531640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.008 [2024-04-19 04:13:48.531647] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.008 [2024-04-19 04:13:48.531652] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.266 [2024-04-19 04:13:48.541988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.266 qpair failed and we were unable to recover it.
00:22:34.266 [2024-04-19 04:13:48.551623] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.266 [2024-04-19 04:13:48.551660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.266 [2024-04-19 04:13:48.551677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.266 [2024-04-19 04:13:48.551686] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.266 [2024-04-19 04:13:48.551691] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.266 [2024-04-19 04:13:48.561994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.266 qpair failed and we were unable to recover it.
00:22:34.266 [2024-04-19 04:13:48.571738] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.266 [2024-04-19 04:13:48.571772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.266 [2024-04-19 04:13:48.571787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.266 [2024-04-19 04:13:48.571793] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.266 [2024-04-19 04:13:48.571799] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.266 [2024-04-19 04:13:48.582042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.266 qpair failed and we were unable to recover it.
00:22:34.266 [2024-04-19 04:13:48.591758] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.266 [2024-04-19 04:13:48.591801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.266 [2024-04-19 04:13:48.591815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.266 [2024-04-19 04:13:48.591822] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.266 [2024-04-19 04:13:48.591827] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.266 [2024-04-19 04:13:48.602213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.266 qpair failed and we were unable to recover it.
00:22:34.266 [2024-04-19 04:13:48.611921] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:34.266 [2024-04-19 04:13:48.611958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:34.266 [2024-04-19 04:13:48.611972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:34.266 [2024-04-19 04:13:48.611978] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:34.266 [2024-04-19 04:13:48.611983] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:34.266 [2024-04-19 04:13:48.622261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:34.266 qpair failed and we were unable to recover it. 
00:22:34.266 [2024-04-19 04:13:48.631895] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:34.266 [2024-04-19 04:13:48.631931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:34.266 [2024-04-19 04:13:48.631945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:34.266 [2024-04-19 04:13:48.631952] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:34.266 [2024-04-19 04:13:48.631957] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:34.266 [2024-04-19 04:13:48.642215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:34.266 qpair failed and we were unable to recover it. 
00:22:34.266 [2024-04-19 04:13:48.651919] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:34.266 [2024-04-19 04:13:48.651955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:34.266 [2024-04-19 04:13:48.651968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:34.266 [2024-04-19 04:13:48.651975] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:34.266 [2024-04-19 04:13:48.651980] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:34.266 [2024-04-19 04:13:48.662365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:34.266 qpair failed and we were unable to recover it. 
00:22:34.266 [2024-04-19 04:13:48.671954] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:34.266 [2024-04-19 04:13:48.671988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:34.266 [2024-04-19 04:13:48.672002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:34.266 [2024-04-19 04:13:48.672008] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:34.266 [2024-04-19 04:13:48.672013] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:34.266 [2024-04-19 04:13:48.682287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:34.266 qpair failed and we were unable to recover it. 
00:22:34.266 [2024-04-19 04:13:48.692076] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:34.266 [2024-04-19 04:13:48.692106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:34.266 [2024-04-19 04:13:48.692120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:34.266 [2024-04-19 04:13:48.692126] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:34.266 [2024-04-19 04:13:48.692131] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:34.266 [2024-04-19 04:13:48.702305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:34.266 qpair failed and we were unable to recover it. 
00:22:34.266 [2024-04-19 04:13:48.712161] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:34.266 [2024-04-19 04:13:48.712193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:34.266 [2024-04-19 04:13:48.712207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:34.266 [2024-04-19 04:13:48.712213] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:34.266 [2024-04-19 04:13:48.712218] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:34.266 [2024-04-19 04:13:48.722504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:34.266 qpair failed and we were unable to recover it. 
00:22:34.266 [2024-04-19 04:13:48.732160] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:34.266 [2024-04-19 04:13:48.732196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:34.266 [2024-04-19 04:13:48.732212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:34.266 [2024-04-19 04:13:48.732218] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:34.266 [2024-04-19 04:13:48.732223] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:34.266 [2024-04-19 04:13:48.742348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:34.266 qpair failed and we were unable to recover it. 
00:22:34.266 [2024-04-19 04:13:48.752274] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:34.266 [2024-04-19 04:13:48.752308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:34.266 [2024-04-19 04:13:48.752322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:34.266 [2024-04-19 04:13:48.752327] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:34.266 [2024-04-19 04:13:48.752333] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:34.266 [2024-04-19 04:13:48.762752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:34.266 qpair failed and we were unable to recover it. 
00:22:34.266 [2024-04-19 04:13:48.772325] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:34.266 [2024-04-19 04:13:48.772364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:34.266 [2024-04-19 04:13:48.772378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:34.266 [2024-04-19 04:13:48.772384] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:34.266 [2024-04-19 04:13:48.772390] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:34.266 [2024-04-19 04:13:48.782578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:34.266 qpair failed and we were unable to recover it. 
00:22:34.266 [2024-04-19 04:13:48.792305] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:34.266 [2024-04-19 04:13:48.792340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:34.266 [2024-04-19 04:13:48.792357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:34.266 [2024-04-19 04:13:48.792364] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:34.266 [2024-04-19 04:13:48.792370] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:34.524 [2024-04-19 04:13:48.802916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:34.524 qpair failed and we were unable to recover it. 
00:22:34.524 [2024-04-19 04:13:48.812415] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.524 [2024-04-19 04:13:48.812450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.524 [2024-04-19 04:13:48.812465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.525 [2024-04-19 04:13:48.812471] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.525 [2024-04-19 04:13:48.812479] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.525 [2024-04-19 04:13:48.822719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.525 qpair failed and we were unable to recover it.
00:22:34.525 [2024-04-19 04:13:48.832491] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.525 [2024-04-19 04:13:48.832530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.525 [2024-04-19 04:13:48.832544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.525 [2024-04-19 04:13:48.832550] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.525 [2024-04-19 04:13:48.832555] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.525 [2024-04-19 04:13:48.842875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.525 qpair failed and we were unable to recover it.
00:22:34.525 [2024-04-19 04:13:48.852625] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.525 [2024-04-19 04:13:48.852657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.525 [2024-04-19 04:13:48.852670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.525 [2024-04-19 04:13:48.852676] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.525 [2024-04-19 04:13:48.852681] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.525 [2024-04-19 04:13:48.862871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.525 qpair failed and we were unable to recover it.
00:22:34.525 [2024-04-19 04:13:48.872683] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.525 [2024-04-19 04:13:48.872719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.525 [2024-04-19 04:13:48.872733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.525 [2024-04-19 04:13:48.872739] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.525 [2024-04-19 04:13:48.872745] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.525 [2024-04-19 04:13:48.883084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.525 qpair failed and we were unable to recover it.
00:22:34.525 [2024-04-19 04:13:48.892603] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.525 [2024-04-19 04:13:48.892638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.525 [2024-04-19 04:13:48.892652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.525 [2024-04-19 04:13:48.892658] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.525 [2024-04-19 04:13:48.892663] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.525 [2024-04-19 04:13:48.902950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.525 qpair failed and we were unable to recover it.
00:22:34.525 [2024-04-19 04:13:48.912762] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.525 [2024-04-19 04:13:48.912799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.525 [2024-04-19 04:13:48.912813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.525 [2024-04-19 04:13:48.912819] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.525 [2024-04-19 04:13:48.912824] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.525 [2024-04-19 04:13:48.922935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.525 qpair failed and we were unable to recover it.
00:22:34.525 [2024-04-19 04:13:48.932780] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.525 [2024-04-19 04:13:48.932816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.525 [2024-04-19 04:13:48.932830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.525 [2024-04-19 04:13:48.932836] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.525 [2024-04-19 04:13:48.932842] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.525 [2024-04-19 04:13:48.943231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.525 qpair failed and we were unable to recover it.
00:22:34.525 [2024-04-19 04:13:48.952839] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.525 [2024-04-19 04:13:48.952867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.525 [2024-04-19 04:13:48.952880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.525 [2024-04-19 04:13:48.952887] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.525 [2024-04-19 04:13:48.952892] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.525 [2024-04-19 04:13:48.963157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.525 qpair failed and we were unable to recover it.
00:22:34.525 [2024-04-19 04:13:48.972940] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.525 [2024-04-19 04:13:48.972975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.525 [2024-04-19 04:13:48.972990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.525 [2024-04-19 04:13:48.972996] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.525 [2024-04-19 04:13:48.973001] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.525 [2024-04-19 04:13:48.983152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.525 qpair failed and we were unable to recover it.
00:22:34.525 [2024-04-19 04:13:48.993097] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.525 [2024-04-19 04:13:48.993129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.525 [2024-04-19 04:13:48.993143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.525 [2024-04-19 04:13:48.993152] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.525 [2024-04-19 04:13:48.993157] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.525 [2024-04-19 04:13:49.003249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.525 qpair failed and we were unable to recover it.
00:22:34.525 [2024-04-19 04:13:49.013123] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.525 [2024-04-19 04:13:49.013158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.525 [2024-04-19 04:13:49.013171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.525 [2024-04-19 04:13:49.013177] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.525 [2024-04-19 04:13:49.013183] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.525 [2024-04-19 04:13:49.023383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.525 qpair failed and we were unable to recover it.
00:22:34.525 [2024-04-19 04:13:49.033145] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.525 [2024-04-19 04:13:49.033179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.525 [2024-04-19 04:13:49.033193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.525 [2024-04-19 04:13:49.033199] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.525 [2024-04-19 04:13:49.033205] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.525 [2024-04-19 04:13:49.043403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.525 qpair failed and we were unable to recover it.
00:22:34.525 [2024-04-19 04:13:49.053092] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.525 [2024-04-19 04:13:49.053129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.525 [2024-04-19 04:13:49.053147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.525 [2024-04-19 04:13:49.053153] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.525 [2024-04-19 04:13:49.053158] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.785 [2024-04-19 04:13:49.063443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.785 qpair failed and we were unable to recover it.
00:22:34.785 [2024-04-19 04:13:49.073209] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.785 [2024-04-19 04:13:49.073250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.785 [2024-04-19 04:13:49.073265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.785 [2024-04-19 04:13:49.073271] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.785 [2024-04-19 04:13:49.073277] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.785 [2024-04-19 04:13:49.083591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.785 qpair failed and we were unable to recover it.
00:22:34.785 [2024-04-19 04:13:49.093267] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.785 [2024-04-19 04:13:49.093298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.785 [2024-04-19 04:13:49.093312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.785 [2024-04-19 04:13:49.093318] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.785 [2024-04-19 04:13:49.093323] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.785 [2024-04-19 04:13:49.103562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.785 qpair failed and we were unable to recover it.
00:22:34.785 [2024-04-19 04:13:49.113387] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.785 [2024-04-19 04:13:49.113419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.785 [2024-04-19 04:13:49.113433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.785 [2024-04-19 04:13:49.113439] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.785 [2024-04-19 04:13:49.113444] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.785 [2024-04-19 04:13:49.123623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.785 qpair failed and we were unable to recover it.
00:22:34.785 [2024-04-19 04:13:49.133417] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.785 [2024-04-19 04:13:49.133449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.785 [2024-04-19 04:13:49.133462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.785 [2024-04-19 04:13:49.133468] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.785 [2024-04-19 04:13:49.133473] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.785 [2024-04-19 04:13:49.143614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.785 qpair failed and we were unable to recover it.
00:22:34.785 [2024-04-19 04:13:49.153477] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.785 [2024-04-19 04:13:49.153517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.785 [2024-04-19 04:13:49.153531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.785 [2024-04-19 04:13:49.153536] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.785 [2024-04-19 04:13:49.153542] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.785 [2024-04-19 04:13:49.163736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.785 qpair failed and we were unable to recover it.
00:22:34.785 [2024-04-19 04:13:49.173450] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.785 [2024-04-19 04:13:49.173483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.785 [2024-04-19 04:13:49.173500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.785 [2024-04-19 04:13:49.173506] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.785 [2024-04-19 04:13:49.173511] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.785 [2024-04-19 04:13:49.183734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.785 qpair failed and we were unable to recover it.
00:22:34.785 [2024-04-19 04:13:49.193532] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.785 [2024-04-19 04:13:49.193566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.785 [2024-04-19 04:13:49.193580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.785 [2024-04-19 04:13:49.193586] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.785 [2024-04-19 04:13:49.193591] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.785 [2024-04-19 04:13:49.203827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.785 qpair failed and we were unable to recover it.
00:22:34.785 [2024-04-19 04:13:49.213643] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.785 [2024-04-19 04:13:49.213679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.785 [2024-04-19 04:13:49.213693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.785 [2024-04-19 04:13:49.213699] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.786 [2024-04-19 04:13:49.213705] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.786 [2024-04-19 04:13:49.224105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.786 qpair failed and we were unable to recover it.
00:22:34.786 [2024-04-19 04:13:49.233710] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.786 [2024-04-19 04:13:49.233749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.786 [2024-04-19 04:13:49.233763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.786 [2024-04-19 04:13:49.233769] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.786 [2024-04-19 04:13:49.233774] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.786 [2024-04-19 04:13:49.244137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.786 qpair failed and we were unable to recover it.
00:22:34.786 [2024-04-19 04:13:49.253709] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.786 [2024-04-19 04:13:49.253745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.786 [2024-04-19 04:13:49.253759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.786 [2024-04-19 04:13:49.253766] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.786 [2024-04-19 04:13:49.253774] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.786 [2024-04-19 04:13:49.263957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.786 qpair failed and we were unable to recover it.
00:22:34.786 [2024-04-19 04:13:49.273764] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:34.786 [2024-04-19 04:13:49.273799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:34.786 [2024-04-19 04:13:49.273812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:34.786 [2024-04-19 04:13:49.273819] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:34.786 [2024-04-19 04:13:49.273824] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:34.786 [2024-04-19 04:13:49.284127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.786 qpair failed and we were unable to recover it.
00:22:34.786 [2024-04-19 04:13:49.293866] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:34.786 [2024-04-19 04:13:49.293901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:34.786 [2024-04-19 04:13:49.293915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:34.786 [2024-04-19 04:13:49.293921] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:34.786 [2024-04-19 04:13:49.293926] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:34.786 [2024-04-19 04:13:49.304070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:34.786 qpair failed and we were unable to recover it. 
00:22:35.046 [2024-04-19 04:13:49.313920] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.046 [2024-04-19 04:13:49.313960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.046 [2024-04-19 04:13:49.313977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.046 [2024-04-19 04:13:49.313984] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.046 [2024-04-19 04:13:49.313990] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:35.046 [2024-04-19 04:13:49.324292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.046 qpair failed and we were unable to recover it. 
00:22:35.046 [2024-04-19 04:13:49.333865] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.046 [2024-04-19 04:13:49.333896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.046 [2024-04-19 04:13:49.333912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.046 [2024-04-19 04:13:49.333919] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.046 [2024-04-19 04:13:49.333924] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:35.046 [2024-04-19 04:13:49.344234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.046 qpair failed and we were unable to recover it. 
00:22:35.046 [2024-04-19 04:13:49.353915] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.046 [2024-04-19 04:13:49.353950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.046 [2024-04-19 04:13:49.353964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.046 [2024-04-19 04:13:49.353970] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.046 [2024-04-19 04:13:49.353975] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:35.046 [2024-04-19 04:13:49.364239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.046 qpair failed and we were unable to recover it. 
00:22:35.046 [2024-04-19 04:13:49.374080] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.046 [2024-04-19 04:13:49.374113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.046 [2024-04-19 04:13:49.374127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.046 [2024-04-19 04:13:49.374133] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.046 [2024-04-19 04:13:49.374138] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:35.046 [2024-04-19 04:13:49.384362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.046 qpair failed and we were unable to recover it. 
00:22:35.046 [2024-04-19 04:13:49.394042] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.046 [2024-04-19 04:13:49.394077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.046 [2024-04-19 04:13:49.394091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.046 [2024-04-19 04:13:49.394097] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.046 [2024-04-19 04:13:49.394103] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:35.046 [2024-04-19 04:13:49.404676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.046 qpair failed and we were unable to recover it. 
00:22:35.046 [2024-04-19 04:13:49.414087] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.046 [2024-04-19 04:13:49.414118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.046 [2024-04-19 04:13:49.414131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.046 [2024-04-19 04:13:49.414137] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.046 [2024-04-19 04:13:49.414143] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:35.046 [2024-04-19 04:13:49.424546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.046 qpair failed and we were unable to recover it. 
00:22:35.046 [2024-04-19 04:13:49.434159] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.046 [2024-04-19 04:13:49.434193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.046 [2024-04-19 04:13:49.434207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.046 [2024-04-19 04:13:49.434216] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.046 [2024-04-19 04:13:49.434221] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:35.046 [2024-04-19 04:13:49.444495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.046 qpair failed and we were unable to recover it. 
00:22:35.046 [2024-04-19 04:13:49.454168] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.046 [2024-04-19 04:13:49.454206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.046 [2024-04-19 04:13:49.454219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.046 [2024-04-19 04:13:49.454225] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.046 [2024-04-19 04:13:49.454230] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:35.046 [2024-04-19 04:13:49.464684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.046 qpair failed and we were unable to recover it. 
00:22:35.046 [2024-04-19 04:13:49.474293] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.046 [2024-04-19 04:13:49.474328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.046 [2024-04-19 04:13:49.474342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.046 [2024-04-19 04:13:49.474348] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.046 [2024-04-19 04:13:49.474353] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:35.046 [2024-04-19 04:13:49.484651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.046 qpair failed and we were unable to recover it. 
00:22:35.046 [2024-04-19 04:13:49.494394] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.046 [2024-04-19 04:13:49.494430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.046 [2024-04-19 04:13:49.494445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.046 [2024-04-19 04:13:49.494450] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.046 [2024-04-19 04:13:49.494456] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:35.046 [2024-04-19 04:13:49.504814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.046 qpair failed and we were unable to recover it. 
00:22:35.046 [2024-04-19 04:13:49.514577] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.046 [2024-04-19 04:13:49.514613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.046 [2024-04-19 04:13:49.514627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.046 [2024-04-19 04:13:49.514634] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.047 [2024-04-19 04:13:49.514639] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:35.047 [2024-04-19 04:13:49.524704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.047 qpair failed and we were unable to recover it. 
00:22:35.047 [2024-04-19 04:13:49.534442] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.047 [2024-04-19 04:13:49.534479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.047 [2024-04-19 04:13:49.534493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.047 [2024-04-19 04:13:49.534499] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.047 [2024-04-19 04:13:49.534504] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:35.047 [2024-04-19 04:13:49.544839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.047 qpair failed and we were unable to recover it. 
00:22:35.047 [2024-04-19 04:13:49.554591] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.047 [2024-04-19 04:13:49.554629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.047 [2024-04-19 04:13:49.554643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.047 [2024-04-19 04:13:49.554648] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.047 [2024-04-19 04:13:49.554653] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:35.047 [2024-04-19 04:13:49.564849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.047 qpair failed and we were unable to recover it. 
00:22:35.306 [2024-04-19 04:13:49.574666] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.306 [2024-04-19 04:13:49.574704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.306 [2024-04-19 04:13:49.574741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.306 [2024-04-19 04:13:49.574751] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.306 [2024-04-19 04:13:49.574760] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:35.306 [2024-04-19 04:13:49.585069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.306 qpair failed and we were unable to recover it. 
00:22:35.306 [2024-04-19 04:13:49.594679] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.306 [2024-04-19 04:13:49.594709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.306 [2024-04-19 04:13:49.594725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.306 [2024-04-19 04:13:49.594731] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.306 [2024-04-19 04:13:49.594736] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:35.306 [2024-04-19 04:13:49.605089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.306 qpair failed and we were unable to recover it. 
00:22:35.306 [2024-04-19 04:13:49.614610] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.306 [2024-04-19 04:13:49.614646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.306 [2024-04-19 04:13:49.614662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.306 [2024-04-19 04:13:49.614668] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.306 [2024-04-19 04:13:49.614673] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:35.306 [2024-04-19 04:13:49.625025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.306 qpair failed and we were unable to recover it. 
00:22:35.306 [2024-04-19 04:13:49.634741] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.306 [2024-04-19 04:13:49.634777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.306 [2024-04-19 04:13:49.634791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.306 [2024-04-19 04:13:49.634797] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.306 [2024-04-19 04:13:49.634802] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:35.306 [2024-04-19 04:13:49.645239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.306 qpair failed and we were unable to recover it. 
00:22:35.306 [2024-04-19 04:13:49.654872] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.306 [2024-04-19 04:13:49.654910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.306 [2024-04-19 04:13:49.654923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.306 [2024-04-19 04:13:49.654929] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.306 [2024-04-19 04:13:49.654935] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:35.306 [2024-04-19 04:13:49.665277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.306 qpair failed and we were unable to recover it. 
00:22:35.306 [2024-04-19 04:13:49.674815] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.306 [2024-04-19 04:13:49.674843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.306 [2024-04-19 04:13:49.674856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.306 [2024-04-19 04:13:49.674862] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.306 [2024-04-19 04:13:49.674867] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:35.306 [2024-04-19 04:13:49.685178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.306 qpair failed and we were unable to recover it. 
00:22:35.306 [2024-04-19 04:13:49.694942] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.306 [2024-04-19 04:13:49.694977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.306 [2024-04-19 04:13:49.694991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.306 [2024-04-19 04:13:49.694997] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.306 [2024-04-19 04:13:49.695007] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:35.306 [2024-04-19 04:13:49.705305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.306 qpair failed and we were unable to recover it. 
00:22:35.306 [2024-04-19 04:13:49.714947] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.306 [2024-04-19 04:13:49.714981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.306 [2024-04-19 04:13:49.714994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.306 [2024-04-19 04:13:49.715000] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.306 [2024-04-19 04:13:49.715005] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:35.306 [2024-04-19 04:13:49.725218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.306 qpair failed and we were unable to recover it. 
00:22:35.306 [2024-04-19 04:13:49.735036] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.306 [2024-04-19 04:13:49.735074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.306 [2024-04-19 04:13:49.735087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.306 [2024-04-19 04:13:49.735093] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.306 [2024-04-19 04:13:49.735098] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:35.306 [2024-04-19 04:13:49.745418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.306 qpair failed and we were unable to recover it. 
00:22:35.306 [2024-04-19 04:13:49.755081] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.306 [2024-04-19 04:13:49.755117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.306 [2024-04-19 04:13:49.755130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.306 [2024-04-19 04:13:49.755136] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.306 [2024-04-19 04:13:49.755141] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:35.306 [2024-04-19 04:13:49.765457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.306 qpair failed and we were unable to recover it. 
00:22:35.306 [2024-04-19 04:13:49.775132] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.306 [2024-04-19 04:13:49.775167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.306 [2024-04-19 04:13:49.775180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.306 [2024-04-19 04:13:49.775187] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.306 [2024-04-19 04:13:49.775192] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:35.306 [2024-04-19 04:13:49.785750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.306 qpair failed and we were unable to recover it. 
00:22:35.306 [2024-04-19 04:13:49.795185] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.306 [2024-04-19 04:13:49.795223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.306 [2024-04-19 04:13:49.795237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.306 [2024-04-19 04:13:49.795243] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.307 [2024-04-19 04:13:49.795248] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:35.307 [2024-04-19 04:13:49.805734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.307 qpair failed and we were unable to recover it. 
00:22:35.307 [2024-04-19 04:13:49.815330] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.307 [2024-04-19 04:13:49.815365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.307 [2024-04-19 04:13:49.815378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.307 [2024-04-19 04:13:49.815384] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.307 [2024-04-19 04:13:49.815389] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:35.307 [2024-04-19 04:13:49.825755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.307 qpair failed and we were unable to recover it. 
00:22:35.575 [2024-04-19 04:13:49.835417] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:35.575 [2024-04-19 04:13:49.835462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:35.575 [2024-04-19 04:13:49.835481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:35.575 [2024-04-19 04:13:49.835487] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:35.575 [2024-04-19 04:13:49.835493] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:35.575 [2024-04-19 04:13:49.845768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:35.575 qpair failed and we were unable to recover it. 
00:22:35.575 [2024-04-19 04:13:49.855423] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:35.575 [2024-04-19 04:13:49.855458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:35.575 [2024-04-19 04:13:49.855472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:35.575 [2024-04-19 04:13:49.855478] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:35.575 [2024-04-19 04:13:49.855484] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:35.575 [2024-04-19 04:13:49.865964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:35.575 qpair failed and we were unable to recover it.
00:22:35.575 [2024-04-19 04:13:49.875424] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:35.575 [2024-04-19 04:13:49.875461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:35.575 [2024-04-19 04:13:49.875475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:35.575 [2024-04-19 04:13:49.875484] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:35.575 [2024-04-19 04:13:49.875489] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:35.575 [2024-04-19 04:13:49.885746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:35.575 qpair failed and we were unable to recover it.
00:22:35.575 [2024-04-19 04:13:49.895492] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:35.575 [2024-04-19 04:13:49.895532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:35.575 [2024-04-19 04:13:49.895546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:35.575 [2024-04-19 04:13:49.895552] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:35.575 [2024-04-19 04:13:49.895558] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:35.575 [2024-04-19 04:13:49.906026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:35.575 qpair failed and we were unable to recover it.
00:22:35.575 [2024-04-19 04:13:49.915559] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:35.575 [2024-04-19 04:13:49.915590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:35.575 [2024-04-19 04:13:49.915604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:35.575 [2024-04-19 04:13:49.915610] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:35.575 [2024-04-19 04:13:49.915615] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:35.575 [2024-04-19 04:13:49.925868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:35.575 qpair failed and we were unable to recover it.
00:22:35.575 [2024-04-19 04:13:49.935597] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:35.575 [2024-04-19 04:13:49.935632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:35.575 [2024-04-19 04:13:49.935645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:35.575 [2024-04-19 04:13:49.935651] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:35.575 [2024-04-19 04:13:49.935656] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:35.575 [2024-04-19 04:13:49.946192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:35.575 qpair failed and we were unable to recover it.
00:22:35.575 [2024-04-19 04:13:49.955754] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:35.575 [2024-04-19 04:13:49.955790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:35.575 [2024-04-19 04:13:49.955803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:35.575 [2024-04-19 04:13:49.955809] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:35.575 [2024-04-19 04:13:49.955814] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:35.575 [2024-04-19 04:13:49.965864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:35.575 qpair failed and we were unable to recover it.
00:22:35.575 [2024-04-19 04:13:49.975679] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:35.575 [2024-04-19 04:13:49.975712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:35.575 [2024-04-19 04:13:49.975726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:35.575 [2024-04-19 04:13:49.975732] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:35.575 [2024-04-19 04:13:49.975738] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:35.575 [2024-04-19 04:13:49.986023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:35.575 qpair failed and we were unable to recover it.
00:22:35.575 [2024-04-19 04:13:49.995788] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:35.575 [2024-04-19 04:13:49.995821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:35.575 [2024-04-19 04:13:49.995835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:35.575 [2024-04-19 04:13:49.995841] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:35.575 [2024-04-19 04:13:49.995846] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:35.575 [2024-04-19 04:13:50.006417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:35.575 qpair failed and we were unable to recover it.
00:22:35.575 [2024-04-19 04:13:50.015758] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:35.575 [2024-04-19 04:13:50.015798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:35.575 [2024-04-19 04:13:50.015812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:35.575 [2024-04-19 04:13:50.015818] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:35.575 [2024-04-19 04:13:50.015823] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:35.575 [2024-04-19 04:13:50.026112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:35.575 qpair failed and we were unable to recover it.
00:22:35.575 [2024-04-19 04:13:50.035814] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:35.575 [2024-04-19 04:13:50.035850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:35.575 [2024-04-19 04:13:50.035865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:35.575 [2024-04-19 04:13:50.035872] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:35.575 [2024-04-19 04:13:50.035877] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:35.575 [2024-04-19 04:13:50.046123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:35.575 qpair failed and we were unable to recover it.
00:22:35.575 [2024-04-19 04:13:50.055949] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:35.576 [2024-04-19 04:13:50.055993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:35.576 [2024-04-19 04:13:50.056010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:35.576 [2024-04-19 04:13:50.056016] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:35.576 [2024-04-19 04:13:50.056021] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:35.576 [2024-04-19 04:13:50.066255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:35.576 qpair failed and we were unable to recover it.
00:22:35.576 [2024-04-19 04:13:50.075993] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:35.576 [2024-04-19 04:13:50.076030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:35.576 [2024-04-19 04:13:50.076044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:35.576 [2024-04-19 04:13:50.076051] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:35.576 [2024-04-19 04:13:50.076056] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:35.576 [2024-04-19 04:13:50.086391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:35.576 qpair failed and we were unable to recover it.
00:22:35.576 [2024-04-19 04:13:50.096177] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:35.576 [2024-04-19 04:13:50.096219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:35.576 [2024-04-19 04:13:50.096237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:35.576 [2024-04-19 04:13:50.096244] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:35.576 [2024-04-19 04:13:50.096250] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:35.840 [2024-04-19 04:13:50.106380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:35.840 qpair failed and we were unable to recover it.
00:22:35.840 [2024-04-19 04:13:50.116053] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:35.840 [2024-04-19 04:13:50.116095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:35.840 [2024-04-19 04:13:50.116112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:35.840 [2024-04-19 04:13:50.116119] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:35.840 [2024-04-19 04:13:50.116124] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:35.840 [2024-04-19 04:13:50.126498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:35.840 qpair failed and we were unable to recover it.
00:22:35.840 [2024-04-19 04:13:50.136132] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:35.840 [2024-04-19 04:13:50.136170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:35.840 [2024-04-19 04:13:50.136183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:35.840 [2024-04-19 04:13:50.136190] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:35.840 [2024-04-19 04:13:50.136198] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:35.840 [2024-04-19 04:13:50.146425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:35.840 qpair failed and we were unable to recover it.
00:22:35.840 [2024-04-19 04:13:50.156236] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:35.840 [2024-04-19 04:13:50.156270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:35.840 [2024-04-19 04:13:50.156283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:35.840 [2024-04-19 04:13:50.156290] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:35.840 [2024-04-19 04:13:50.156295] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:35.840 [2024-04-19 04:13:50.166595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:35.840 qpair failed and we were unable to recover it.
00:22:35.840 [2024-04-19 04:13:50.176217] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:35.840 [2024-04-19 04:13:50.176255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:35.840 [2024-04-19 04:13:50.176268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:35.840 [2024-04-19 04:13:50.176274] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:35.840 [2024-04-19 04:13:50.176280] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:35.840 [2024-04-19 04:13:50.186391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:35.840 qpair failed and we were unable to recover it.
00:22:35.840 [2024-04-19 04:13:50.196303] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:35.840 [2024-04-19 04:13:50.196343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:35.840 [2024-04-19 04:13:50.196357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:35.840 [2024-04-19 04:13:50.196362] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:35.840 [2024-04-19 04:13:50.196368] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:35.840 [2024-04-19 04:13:50.206816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:35.840 qpair failed and we were unable to recover it.
00:22:35.840 [2024-04-19 04:13:50.216220] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:35.840 [2024-04-19 04:13:50.216257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:35.840 [2024-04-19 04:13:50.216273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:35.840 [2024-04-19 04:13:50.216279] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:35.840 [2024-04-19 04:13:50.216284] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:35.840 [2024-04-19 04:13:50.226718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:35.840 qpair failed and we were unable to recover it.
00:22:35.840 [2024-04-19 04:13:50.236483] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:35.840 [2024-04-19 04:13:50.236525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:35.840 [2024-04-19 04:13:50.236548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:35.840 [2024-04-19 04:13:50.236558] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:35.840 [2024-04-19 04:13:50.236566] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:35.840 [2024-04-19 04:13:50.246825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:35.840 qpair failed and we were unable to recover it.
00:22:35.840 [2024-04-19 04:13:50.256552] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:35.840 [2024-04-19 04:13:50.256589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:35.840 [2024-04-19 04:13:50.256603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:35.840 [2024-04-19 04:13:50.256610] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:35.840 [2024-04-19 04:13:50.256615] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:35.840 [2024-04-19 04:13:50.266925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:35.840 qpair failed and we were unable to recover it.
00:22:35.840 [2024-04-19 04:13:50.276541] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:35.840 [2024-04-19 04:13:50.276581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:35.840 [2024-04-19 04:13:50.276594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:35.840 [2024-04-19 04:13:50.276600] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:35.840 [2024-04-19 04:13:50.276605] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:35.840 [2024-04-19 04:13:50.286804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:35.840 qpair failed and we were unable to recover it.
00:22:35.840 [2024-04-19 04:13:50.296640] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:35.840 [2024-04-19 04:13:50.296677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:35.840 [2024-04-19 04:13:50.296692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:35.840 [2024-04-19 04:13:50.296698] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:35.840 [2024-04-19 04:13:50.296703] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:35.841 [2024-04-19 04:13:50.307123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:35.841 qpair failed and we were unable to recover it.
00:22:35.841 [2024-04-19 04:13:50.316665] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:35.841 [2024-04-19 04:13:50.316702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:35.841 [2024-04-19 04:13:50.316716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:35.841 [2024-04-19 04:13:50.316727] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:35.841 [2024-04-19 04:13:50.316733] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:35.841 [2024-04-19 04:13:50.327192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:35.841 qpair failed and we were unable to recover it.
00:22:35.841 [2024-04-19 04:13:50.336740] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:35.841 [2024-04-19 04:13:50.336775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:35.841 [2024-04-19 04:13:50.336790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:35.841 [2024-04-19 04:13:50.336797] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:35.841 [2024-04-19 04:13:50.336803] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:35.841 [2024-04-19 04:13:50.347144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:35.841 qpair failed and we were unable to recover it.
00:22:35.841 [2024-04-19 04:13:50.356818] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:35.841 [2024-04-19 04:13:50.356853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:35.841 [2024-04-19 04:13:50.356867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:35.841 [2024-04-19 04:13:50.356873] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:35.841 [2024-04-19 04:13:50.356878] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:35.841 [2024-04-19 04:13:50.367209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:35.841 qpair failed and we were unable to recover it.
00:22:36.098 [2024-04-19 04:13:50.376800] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.098 [2024-04-19 04:13:50.376840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.098 [2024-04-19 04:13:50.376855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.098 [2024-04-19 04:13:50.376861] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.098 [2024-04-19 04:13:50.376866] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:36.098 [2024-04-19 04:13:50.387019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:36.098 qpair failed and we were unable to recover it.
00:22:36.098 [2024-04-19 04:13:50.396829] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.098 [2024-04-19 04:13:50.396868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.098 [2024-04-19 04:13:50.396882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.098 [2024-04-19 04:13:50.396888] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.098 [2024-04-19 04:13:50.396894] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:36.098 [2024-04-19 04:13:50.407369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:36.098 qpair failed and we were unable to recover it.
00:22:36.098 [2024-04-19 04:13:50.416877] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.098 [2024-04-19 04:13:50.416913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.098 [2024-04-19 04:13:50.416928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.098 [2024-04-19 04:13:50.416934] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.098 [2024-04-19 04:13:50.416939] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:36.098 [2024-04-19 04:13:50.427248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:36.098 qpair failed and we were unable to recover it.
00:22:36.098 [2024-04-19 04:13:50.436949] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.098 [2024-04-19 04:13:50.436987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.098 [2024-04-19 04:13:50.437001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.098 [2024-04-19 04:13:50.437007] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.098 [2024-04-19 04:13:50.437012] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:36.098 [2024-04-19 04:13:50.447255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:36.098 qpair failed and we were unable to recover it.
00:22:36.099 [2024-04-19 04:13:50.457083] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.099 [2024-04-19 04:13:50.457118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.099 [2024-04-19 04:13:50.457132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.099 [2024-04-19 04:13:50.457138] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.099 [2024-04-19 04:13:50.457143] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:36.099 [2024-04-19 04:13:50.467589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:36.099 qpair failed and we were unable to recover it.
00:22:36.099 [2024-04-19 04:13:50.477124] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.099 [2024-04-19 04:13:50.477155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.099 [2024-04-19 04:13:50.477168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.099 [2024-04-19 04:13:50.477174] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.099 [2024-04-19 04:13:50.477180] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:36.099 [2024-04-19 04:13:50.487627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:36.099 qpair failed and we were unable to recover it.
00:22:36.099 [2024-04-19 04:13:50.497146] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.099 [2024-04-19 04:13:50.497181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.099 [2024-04-19 04:13:50.497198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.099 [2024-04-19 04:13:50.497204] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.099 [2024-04-19 04:13:50.497209] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:36.099 [2024-04-19 04:13:50.507500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:36.099 qpair failed and we were unable to recover it.
00:22:36.099 [2024-04-19 04:13:50.517199] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.099 [2024-04-19 04:13:50.517237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.099 [2024-04-19 04:13:50.517250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.099 [2024-04-19 04:13:50.517256] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.099 [2024-04-19 04:13:50.517261] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:36.099 [2024-04-19 04:13:50.527561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:36.099 qpair failed and we were unable to recover it.
00:22:36.099 [2024-04-19 04:13:50.537294] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.099 [2024-04-19 04:13:50.537328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.099 [2024-04-19 04:13:50.537342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.099 [2024-04-19 04:13:50.537348] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.099 [2024-04-19 04:13:50.537353] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:36.099 [2024-04-19 04:13:50.547612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:36.099 qpair failed and we were unable to recover it.
00:22:36.099 [2024-04-19 04:13:50.557337] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.099 [2024-04-19 04:13:50.557369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.099 [2024-04-19 04:13:50.557382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.099 [2024-04-19 04:13:50.557388] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.099 [2024-04-19 04:13:50.557393] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:36.099 [2024-04-19 04:13:50.567745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:36.099 qpair failed and we were unable to recover it.
00:22:36.099 [2024-04-19 04:13:50.577426] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.099 [2024-04-19 04:13:50.577462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.099 [2024-04-19 04:13:50.577476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.099 [2024-04-19 04:13:50.577482] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.099 [2024-04-19 04:13:50.577490] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.099 [2024-04-19 04:13:50.587742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.099 qpair failed and we were unable to recover it. 
00:22:36.099 [2024-04-19 04:13:50.597501] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.099 [2024-04-19 04:13:50.597541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.099 [2024-04-19 04:13:50.597554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.099 [2024-04-19 04:13:50.597560] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.099 [2024-04-19 04:13:50.597566] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.099 [2024-04-19 04:13:50.607848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.099 qpair failed and we were unable to recover it. 
00:22:36.099 [2024-04-19 04:13:50.617465] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.099 [2024-04-19 04:13:50.617505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.099 [2024-04-19 04:13:50.617518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.099 [2024-04-19 04:13:50.617525] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.099 [2024-04-19 04:13:50.617530] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.358 [2024-04-19 04:13:50.627927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.358 qpair failed and we were unable to recover it. 
00:22:36.358 [2024-04-19 04:13:50.637567] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.358 [2024-04-19 04:13:50.637601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.358 [2024-04-19 04:13:50.637614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.358 [2024-04-19 04:13:50.637620] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.358 [2024-04-19 04:13:50.637625] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.358 [2024-04-19 04:13:50.647788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.358 qpair failed and we were unable to recover it. 
00:22:36.358 [2024-04-19 04:13:50.657637] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.358 [2024-04-19 04:13:50.657673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.358 [2024-04-19 04:13:50.657686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.358 [2024-04-19 04:13:50.657692] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.358 [2024-04-19 04:13:50.657698] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.359 [2024-04-19 04:13:50.667866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.359 qpair failed and we were unable to recover it. 
00:22:36.359 [2024-04-19 04:13:50.677679] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.359 [2024-04-19 04:13:50.677713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.359 [2024-04-19 04:13:50.677726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.359 [2024-04-19 04:13:50.677731] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.359 [2024-04-19 04:13:50.677737] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.359 [2024-04-19 04:13:50.688089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.359 qpair failed and we were unable to recover it. 
00:22:36.359 [2024-04-19 04:13:50.697601] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.359 [2024-04-19 04:13:50.697640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.359 [2024-04-19 04:13:50.697653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.359 [2024-04-19 04:13:50.697659] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.359 [2024-04-19 04:13:50.697664] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.359 [2024-04-19 04:13:50.708026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.359 qpair failed and we were unable to recover it. 
00:22:36.359 [2024-04-19 04:13:50.717825] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.359 [2024-04-19 04:13:50.717860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.359 [2024-04-19 04:13:50.717873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.359 [2024-04-19 04:13:50.717879] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.359 [2024-04-19 04:13:50.717885] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.359 [2024-04-19 04:13:50.728082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.359 qpair failed and we were unable to recover it. 
00:22:36.359 [2024-04-19 04:13:50.738035] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.359 [2024-04-19 04:13:50.738070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.359 [2024-04-19 04:13:50.738082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.359 [2024-04-19 04:13:50.738088] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.359 [2024-04-19 04:13:50.738094] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.359 [2024-04-19 04:13:50.748206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.359 qpair failed and we were unable to recover it. 
00:22:36.359 [2024-04-19 04:13:50.757915] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.359 [2024-04-19 04:13:50.757954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.359 [2024-04-19 04:13:50.757968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.359 [2024-04-19 04:13:50.757978] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.359 [2024-04-19 04:13:50.757983] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.359 [2024-04-19 04:13:50.768303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.359 qpair failed and we were unable to recover it. 
00:22:36.359 [2024-04-19 04:13:50.778027] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.359 [2024-04-19 04:13:50.778061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.359 [2024-04-19 04:13:50.778074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.359 [2024-04-19 04:13:50.778080] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.359 [2024-04-19 04:13:50.778085] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.359 [2024-04-19 04:13:50.788383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.359 qpair failed and we were unable to recover it. 
00:22:36.359 [2024-04-19 04:13:50.798101] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.359 [2024-04-19 04:13:50.798136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.359 [2024-04-19 04:13:50.798149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.359 [2024-04-19 04:13:50.798155] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.359 [2024-04-19 04:13:50.798161] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.359 [2024-04-19 04:13:50.808428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.359 qpair failed and we were unable to recover it. 
00:22:36.359 [2024-04-19 04:13:50.818117] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.359 [2024-04-19 04:13:50.818151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.359 [2024-04-19 04:13:50.818163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.359 [2024-04-19 04:13:50.818169] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.359 [2024-04-19 04:13:50.818175] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.359 [2024-04-19 04:13:50.828421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.359 qpair failed and we were unable to recover it. 
00:22:36.359 [2024-04-19 04:13:50.838142] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.359 [2024-04-19 04:13:50.838175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.359 [2024-04-19 04:13:50.838188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.359 [2024-04-19 04:13:50.838194] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.359 [2024-04-19 04:13:50.838199] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.359 [2024-04-19 04:13:50.848404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.359 qpair failed and we were unable to recover it. 
00:22:36.359 [2024-04-19 04:13:50.858252] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.359 [2024-04-19 04:13:50.858290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.359 [2024-04-19 04:13:50.858304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.359 [2024-04-19 04:13:50.858310] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.359 [2024-04-19 04:13:50.858315] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.359 [2024-04-19 04:13:50.868536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.359 qpair failed and we were unable to recover it. 
00:22:36.359 [2024-04-19 04:13:50.878233] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.359 [2024-04-19 04:13:50.878271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.359 [2024-04-19 04:13:50.878284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.359 [2024-04-19 04:13:50.878291] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.359 [2024-04-19 04:13:50.878296] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.618 [2024-04-19 04:13:50.888729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.618 qpair failed and we were unable to recover it. 
00:22:36.618 [2024-04-19 04:13:50.898312] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.618 [2024-04-19 04:13:50.898346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.618 [2024-04-19 04:13:50.898359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.618 [2024-04-19 04:13:50.898364] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.618 [2024-04-19 04:13:50.898370] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.618 [2024-04-19 04:13:50.908759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.618 qpair failed and we were unable to recover it. 
00:22:36.618 [2024-04-19 04:13:50.918441] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.618 [2024-04-19 04:13:50.918474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.618 [2024-04-19 04:13:50.918487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.618 [2024-04-19 04:13:50.918493] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.618 [2024-04-19 04:13:50.918498] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.618 [2024-04-19 04:13:50.928685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.618 qpair failed and we were unable to recover it. 
00:22:36.618 [2024-04-19 04:13:50.938451] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.618 [2024-04-19 04:13:50.938481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.618 [2024-04-19 04:13:50.938497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.618 [2024-04-19 04:13:50.938503] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.618 [2024-04-19 04:13:50.938508] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.618 [2024-04-19 04:13:50.948915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.618 qpair failed and we were unable to recover it. 
00:22:36.618 [2024-04-19 04:13:50.958490] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.618 [2024-04-19 04:13:50.958520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.618 [2024-04-19 04:13:50.958534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.618 [2024-04-19 04:13:50.958540] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.618 [2024-04-19 04:13:50.958545] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.618 [2024-04-19 04:13:50.968825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.618 qpair failed and we were unable to recover it. 
00:22:36.618 [2024-04-19 04:13:50.978558] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.618 [2024-04-19 04:13:50.978595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.618 [2024-04-19 04:13:50.978607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.618 [2024-04-19 04:13:50.978613] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.618 [2024-04-19 04:13:50.978618] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.618 [2024-04-19 04:13:50.988947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.618 qpair failed and we were unable to recover it. 
00:22:36.618 [2024-04-19 04:13:50.998557] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.618 [2024-04-19 04:13:50.998599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.618 [2024-04-19 04:13:50.998613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.618 [2024-04-19 04:13:50.998619] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.618 [2024-04-19 04:13:50.998624] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.618 [2024-04-19 04:13:51.008930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.618 qpair failed and we were unable to recover it. 
00:22:36.618 [2024-04-19 04:13:51.018707] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.618 [2024-04-19 04:13:51.018737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.618 [2024-04-19 04:13:51.018750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.618 [2024-04-19 04:13:51.018756] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.618 [2024-04-19 04:13:51.018764] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.618 [2024-04-19 04:13:51.029026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.618 qpair failed and we were unable to recover it. 
00:22:36.618 [2024-04-19 04:13:51.038709] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.618 [2024-04-19 04:13:51.038741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.618 [2024-04-19 04:13:51.038754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.618 [2024-04-19 04:13:51.038760] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.618 [2024-04-19 04:13:51.038766] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.618 [2024-04-19 04:13:51.049049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.618 qpair failed and we were unable to recover it. 
00:22:36.618 [2024-04-19 04:13:51.058719] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.618 [2024-04-19 04:13:51.058756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.618 [2024-04-19 04:13:51.058769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.618 [2024-04-19 04:13:51.058775] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.618 [2024-04-19 04:13:51.058780] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.619 [2024-04-19 04:13:51.069155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.619 qpair failed and we were unable to recover it. 
00:22:36.619 [2024-04-19 04:13:51.078782] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.619 [2024-04-19 04:13:51.078816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.619 [2024-04-19 04:13:51.078829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.619 [2024-04-19 04:13:51.078835] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.619 [2024-04-19 04:13:51.078840] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.619 [2024-04-19 04:13:51.089276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.619 qpair failed and we were unable to recover it. 
00:22:36.619 [2024-04-19 04:13:51.098919] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.619 [2024-04-19 04:13:51.098954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.619 [2024-04-19 04:13:51.098967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.619 [2024-04-19 04:13:51.098973] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.619 [2024-04-19 04:13:51.098978] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.619 [2024-04-19 04:13:51.109421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.619 qpair failed and we were unable to recover it. 
00:22:36.619 [2024-04-19 04:13:51.118921] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:36.619 [2024-04-19 04:13:51.118956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:36.619 [2024-04-19 04:13:51.118970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:36.619 [2024-04-19 04:13:51.118976] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:36.619 [2024-04-19 04:13:51.118981] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:36.619 [2024-04-19 04:13:51.129245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:36.619 qpair failed and we were unable to recover it. 
00:22:36.619 [2024-04-19 04:13:51.138989] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.619 [2024-04-19 04:13:51.139026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.619 [2024-04-19 04:13:51.139039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.619 [2024-04-19 04:13:51.139045] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.619 [2024-04-19 04:13:51.139050] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:36.876 [2024-04-19 04:13:51.149429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:36.876 qpair failed and we were unable to recover it.
00:22:36.876 [2024-04-19 04:13:51.159004] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.876 [2024-04-19 04:13:51.159042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.876 [2024-04-19 04:13:51.159055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.876 [2024-04-19 04:13:51.159061] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.876 [2024-04-19 04:13:51.159066] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:36.876 [2024-04-19 04:13:51.169411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:36.876 qpair failed and we were unable to recover it.
00:22:36.876 [2024-04-19 04:13:51.179134] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.876 [2024-04-19 04:13:51.179166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.876 [2024-04-19 04:13:51.179179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.876 [2024-04-19 04:13:51.179185] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.876 [2024-04-19 04:13:51.179190] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:36.876 [2024-04-19 04:13:51.189452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:36.876 qpair failed and we were unable to recover it.
00:22:36.876 [2024-04-19 04:13:51.199069] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.876 [2024-04-19 04:13:51.199106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.876 [2024-04-19 04:13:51.199119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.876 [2024-04-19 04:13:51.199128] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.876 [2024-04-19 04:13:51.199134] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:36.877 [2024-04-19 04:13:51.209604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:36.877 qpair failed and we were unable to recover it.
00:22:36.877 [2024-04-19 04:13:51.219170] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.877 [2024-04-19 04:13:51.219207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.877 [2024-04-19 04:13:51.219220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.877 [2024-04-19 04:13:51.219226] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.877 [2024-04-19 04:13:51.219232] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:36.877 [2024-04-19 04:13:51.229520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:36.877 qpair failed and we were unable to recover it.
00:22:36.877 [2024-04-19 04:13:51.239213] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.877 [2024-04-19 04:13:51.239256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.877 [2024-04-19 04:13:51.239269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.877 [2024-04-19 04:13:51.239274] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.877 [2024-04-19 04:13:51.239280] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:36.877 [2024-04-19 04:13:51.249866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:36.877 qpair failed and we were unable to recover it.
00:22:36.877 [2024-04-19 04:13:51.259323] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.877 [2024-04-19 04:13:51.259353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.877 [2024-04-19 04:13:51.259366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.877 [2024-04-19 04:13:51.259372] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.877 [2024-04-19 04:13:51.259377] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:36.877 [2024-04-19 04:13:51.269572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:36.877 qpair failed and we were unable to recover it.
00:22:36.877 [2024-04-19 04:13:51.279418] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.877 [2024-04-19 04:13:51.279450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.877 [2024-04-19 04:13:51.279463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.877 [2024-04-19 04:13:51.279469] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.877 [2024-04-19 04:13:51.279474] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:36.877 [2024-04-19 04:13:51.289797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:36.877 qpair failed and we were unable to recover it.
00:22:36.877 [2024-04-19 04:13:51.299411] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.877 [2024-04-19 04:13:51.299446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.877 [2024-04-19 04:13:51.299459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.877 [2024-04-19 04:13:51.299465] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.877 [2024-04-19 04:13:51.299470] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:36.877 [2024-04-19 04:13:51.309775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:36.877 qpair failed and we were unable to recover it.
00:22:36.877 [2024-04-19 04:13:51.319437] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.877 [2024-04-19 04:13:51.319472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.877 [2024-04-19 04:13:51.319486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.877 [2024-04-19 04:13:51.319493] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.877 [2024-04-19 04:13:51.319498] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:36.877 [2024-04-19 04:13:51.329899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:36.877 qpair failed and we were unable to recover it.
00:22:36.877 [2024-04-19 04:13:51.339519] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.877 [2024-04-19 04:13:51.339553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.877 [2024-04-19 04:13:51.339567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.877 [2024-04-19 04:13:51.339572] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.877 [2024-04-19 04:13:51.339578] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:36.877 [2024-04-19 04:13:51.349906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:36.877 qpair failed and we were unable to recover it.
00:22:36.877 [2024-04-19 04:13:51.359556] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.877 [2024-04-19 04:13:51.359592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.877 [2024-04-19 04:13:51.359605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.877 [2024-04-19 04:13:51.359611] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.877 [2024-04-19 04:13:51.359616] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:36.877 [2024-04-19 04:13:51.369875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:36.877 qpair failed and we were unable to recover it.
00:22:36.877 [2024-04-19 04:13:51.379697] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.877 [2024-04-19 04:13:51.379731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.877 [2024-04-19 04:13:51.379746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.877 [2024-04-19 04:13:51.379752] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.877 [2024-04-19 04:13:51.379758] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:36.877 [2024-04-19 04:13:51.390038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:36.877 qpair failed and we were unable to recover it.
00:22:36.877 [2024-04-19 04:13:51.399695] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:36.877 [2024-04-19 04:13:51.399730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:36.877 [2024-04-19 04:13:51.399744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:36.877 [2024-04-19 04:13:51.399751] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:36.877 [2024-04-19 04:13:51.399756] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:37.136 [2024-04-19 04:13:51.410119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:37.136 qpair failed and we were unable to recover it.
00:22:37.136 [2024-04-19 04:13:51.419804] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.136 [2024-04-19 04:13:51.419836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.136 [2024-04-19 04:13:51.419851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.136 [2024-04-19 04:13:51.419857] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.136 [2024-04-19 04:13:51.419863] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80
00:22:37.136 [2024-04-19 04:13:51.430095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:37.136 qpair failed and we were unable to recover it.
00:22:37.136 [2024-04-19 04:13:51.439887] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.136 [2024-04-19 04:13:51.439925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.136 [2024-04-19 04:13:51.439947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.136 [2024-04-19 04:13:51.439957] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.136 [2024-04-19 04:13:51.439964] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:37.136 [2024-04-19 04:13:51.450370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.136 qpair failed and we were unable to recover it.
00:22:37.136 [2024-04-19 04:13:51.459877] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.136 [2024-04-19 04:13:51.459913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.136 [2024-04-19 04:13:51.459928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.136 [2024-04-19 04:13:51.459935] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.136 [2024-04-19 04:13:51.459943] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:37.136 [2024-04-19 04:13:51.470452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.136 qpair failed and we were unable to recover it.
00:22:37.136 [2024-04-19 04:13:51.479952] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.136 [2024-04-19 04:13:51.479991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.136 [2024-04-19 04:13:51.480005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.136 [2024-04-19 04:13:51.480012] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.136 [2024-04-19 04:13:51.480017] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:37.136 [2024-04-19 04:13:51.490418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.136 qpair failed and we were unable to recover it.
00:22:37.136 [2024-04-19 04:13:51.500090] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.136 [2024-04-19 04:13:51.500121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.136 [2024-04-19 04:13:51.500134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.136 [2024-04-19 04:13:51.500140] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.136 [2024-04-19 04:13:51.500145] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:37.136 [2024-04-19 04:13:51.510460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.136 qpair failed and we were unable to recover it.
00:22:37.136 [2024-04-19 04:13:51.520060] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.136 [2024-04-19 04:13:51.520092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.136 [2024-04-19 04:13:51.520106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.136 [2024-04-19 04:13:51.520112] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.136 [2024-04-19 04:13:51.520117] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:37.136 [2024-04-19 04:13:51.530667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.136 qpair failed and we were unable to recover it.
00:22:37.136 [2024-04-19 04:13:51.540151] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.136 [2024-04-19 04:13:51.540184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.136 [2024-04-19 04:13:51.540198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.136 [2024-04-19 04:13:51.540205] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.136 [2024-04-19 04:13:51.540210] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:37.136 [2024-04-19 04:13:51.550526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.136 qpair failed and we were unable to recover it.
00:22:37.136 [2024-04-19 04:13:51.560251] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.136 [2024-04-19 04:13:51.560289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.136 [2024-04-19 04:13:51.560303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.136 [2024-04-19 04:13:51.560309] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.136 [2024-04-19 04:13:51.560314] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:37.137 [2024-04-19 04:13:51.570628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.137 qpair failed and we were unable to recover it.
00:22:37.137 [2024-04-19 04:13:51.580209] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.137 [2024-04-19 04:13:51.580245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.137 [2024-04-19 04:13:51.580258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.137 [2024-04-19 04:13:51.580264] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.137 [2024-04-19 04:13:51.580269] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:37.137 [2024-04-19 04:13:51.590683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.137 qpair failed and we were unable to recover it.
00:22:37.137 [2024-04-19 04:13:51.600310] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.137 [2024-04-19 04:13:51.600346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.137 [2024-04-19 04:13:51.600359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.137 [2024-04-19 04:13:51.600365] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.137 [2024-04-19 04:13:51.600370] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:37.137 [2024-04-19 04:13:51.610912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.137 qpair failed and we were unable to recover it.
00:22:37.137 [2024-04-19 04:13:51.620382] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.137 [2024-04-19 04:13:51.620422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.137 [2024-04-19 04:13:51.620436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.137 [2024-04-19 04:13:51.620442] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.137 [2024-04-19 04:13:51.620447] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:37.137 [2024-04-19 04:13:51.630744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.137 qpair failed and we were unable to recover it.
00:22:37.137 [2024-04-19 04:13:51.640444] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.137 [2024-04-19 04:13:51.640482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.137 [2024-04-19 04:13:51.640495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.137 [2024-04-19 04:13:51.640504] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.137 [2024-04-19 04:13:51.640509] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:37.137 [2024-04-19 04:13:51.650867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.137 qpair failed and we were unable to recover it.
00:22:37.137 [2024-04-19 04:13:51.660488] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.137 [2024-04-19 04:13:51.660519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.137 [2024-04-19 04:13:51.660534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.137 [2024-04-19 04:13:51.660540] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.137 [2024-04-19 04:13:51.660545] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:37.396 [2024-04-19 04:13:51.670995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.396 qpair failed and we were unable to recover it.
00:22:37.396 [2024-04-19 04:13:51.680596] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.396 [2024-04-19 04:13:51.680627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.396 [2024-04-19 04:13:51.680641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.396 [2024-04-19 04:13:51.680648] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.396 [2024-04-19 04:13:51.680653] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:37.396 [2024-04-19 04:13:51.690966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.396 qpair failed and we were unable to recover it.
00:22:37.396 [2024-04-19 04:13:51.700570] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.396 [2024-04-19 04:13:51.700604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.396 [2024-04-19 04:13:51.700618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.396 [2024-04-19 04:13:51.700624] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.396 [2024-04-19 04:13:51.700630] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:37.396 [2024-04-19 04:13:51.711051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.396 qpair failed and we were unable to recover it.
00:22:37.396 [2024-04-19 04:13:51.720686] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.396 [2024-04-19 04:13:51.720720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.396 [2024-04-19 04:13:51.720734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.396 [2024-04-19 04:13:51.720740] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.396 [2024-04-19 04:13:51.720746] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:37.396 [2024-04-19 04:13:51.731181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.396 qpair failed and we were unable to recover it.
00:22:37.396 [2024-04-19 04:13:51.740721] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.397 [2024-04-19 04:13:51.740755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.397 [2024-04-19 04:13:51.740769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.397 [2024-04-19 04:13:51.740775] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.397 [2024-04-19 04:13:51.740780] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:37.397 [2024-04-19 04:13:51.751033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.397 qpair failed and we were unable to recover it.
00:22:37.397 [2024-04-19 04:13:51.760911] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.397 [2024-04-19 04:13:51.760940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.397 [2024-04-19 04:13:51.760953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.397 [2024-04-19 04:13:51.760959] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.397 [2024-04-19 04:13:51.760964] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:37.397 [2024-04-19 04:13:51.771174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.397 qpair failed and we were unable to recover it.
00:22:37.397 [2024-04-19 04:13:51.780867] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.397 [2024-04-19 04:13:51.780900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.397 [2024-04-19 04:13:51.780914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.397 [2024-04-19 04:13:51.780919] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.397 [2024-04-19 04:13:51.780924] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:37.397 [2024-04-19 04:13:51.791207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.397 qpair failed and we were unable to recover it.
00:22:37.397 [2024-04-19 04:13:51.800936] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.397 [2024-04-19 04:13:51.800976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.397 [2024-04-19 04:13:51.800991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.397 [2024-04-19 04:13:51.800997] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.397 [2024-04-19 04:13:51.801001] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:37.397 [2024-04-19 04:13:51.811447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.397 qpair failed and we were unable to recover it.
00:22:37.397 [2024-04-19 04:13:51.820857] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.397 [2024-04-19 04:13:51.820893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.397 [2024-04-19 04:13:51.820909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.397 [2024-04-19 04:13:51.820915] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.397 [2024-04-19 04:13:51.820921] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:37.397 [2024-04-19 04:13:51.831408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.397 qpair failed and we were unable to recover it.
00:22:37.397 [2024-04-19 04:13:51.840955] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:37.397 [2024-04-19 04:13:51.840990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:37.397 [2024-04-19 04:13:51.841004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:37.397 [2024-04-19 04:13:51.841010] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:37.397 [2024-04-19 04:13:51.841015] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080
00:22:37.397 [2024-04-19 04:13:51.851406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.397 qpair failed and we were unable to recover it.
00:22:37.397 [2024-04-19 04:13:51.861014] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.397 [2024-04-19 04:13:51.861046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.397 [2024-04-19 04:13:51.861059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.397 [2024-04-19 04:13:51.861065] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.397 [2024-04-19 04:13:51.861071] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.397 [2024-04-19 04:13:51.871456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.397 qpair failed and we were unable to recover it. 
00:22:37.397 [2024-04-19 04:13:51.881184] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.397 [2024-04-19 04:13:51.881216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.397 [2024-04-19 04:13:51.881231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.397 [2024-04-19 04:13:51.881237] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.397 [2024-04-19 04:13:51.881242] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.397 [2024-04-19 04:13:51.891590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.397 qpair failed and we were unable to recover it. 
00:22:37.397 [2024-04-19 04:13:51.901156] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.397 [2024-04-19 04:13:51.901188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.397 [2024-04-19 04:13:51.901202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.397 [2024-04-19 04:13:51.901208] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.397 [2024-04-19 04:13:51.901216] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.397 [2024-04-19 04:13:51.911597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.397 qpair failed and we were unable to recover it. 
00:22:37.397 [2024-04-19 04:13:51.921343] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.397 [2024-04-19 04:13:51.921376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.397 [2024-04-19 04:13:51.921394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.397 [2024-04-19 04:13:51.921406] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.397 [2024-04-19 04:13:51.921412] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.656 [2024-04-19 04:13:51.931594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.656 qpair failed and we were unable to recover it. 
00:22:37.656 [2024-04-19 04:13:51.941313] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.656 [2024-04-19 04:13:51.941347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.656 [2024-04-19 04:13:51.941361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.656 [2024-04-19 04:13:51.941368] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.656 [2024-04-19 04:13:51.941373] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.656 [2024-04-19 04:13:51.951670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.656 qpair failed and we were unable to recover it. 
00:22:37.656 [2024-04-19 04:13:51.961236] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.656 [2024-04-19 04:13:51.961274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.656 [2024-04-19 04:13:51.961288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.656 [2024-04-19 04:13:51.961295] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.656 [2024-04-19 04:13:51.961300] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.656 [2024-04-19 04:13:51.971874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.656 qpair failed and we were unable to recover it. 
00:22:37.656 [2024-04-19 04:13:51.981347] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.656 [2024-04-19 04:13:51.981379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.656 [2024-04-19 04:13:51.981392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.656 [2024-04-19 04:13:51.981398] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.656 [2024-04-19 04:13:51.981412] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.656 [2024-04-19 04:13:51.991699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.656 qpair failed and we were unable to recover it. 
00:22:37.656 [2024-04-19 04:13:52.001541] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.656 [2024-04-19 04:13:52.001575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.656 [2024-04-19 04:13:52.001589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.656 [2024-04-19 04:13:52.001595] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.656 [2024-04-19 04:13:52.001600] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.656 [2024-04-19 04:13:52.011788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.656 qpair failed and we were unable to recover it. 
00:22:37.656 [2024-04-19 04:13:52.021497] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.656 [2024-04-19 04:13:52.021530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.656 [2024-04-19 04:13:52.021544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.657 [2024-04-19 04:13:52.021549] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.657 [2024-04-19 04:13:52.021555] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.657 [2024-04-19 04:13:52.031962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.657 qpair failed and we were unable to recover it. 
00:22:37.657 [2024-04-19 04:13:52.041553] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.657 [2024-04-19 04:13:52.041589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.657 [2024-04-19 04:13:52.041602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.657 [2024-04-19 04:13:52.041608] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.657 [2024-04-19 04:13:52.041613] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.657 [2024-04-19 04:13:52.051977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.657 qpair failed and we were unable to recover it. 
00:22:37.657 [2024-04-19 04:13:52.061704] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.657 [2024-04-19 04:13:52.061737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.657 [2024-04-19 04:13:52.061751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.657 [2024-04-19 04:13:52.061757] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.657 [2024-04-19 04:13:52.061762] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.657 [2024-04-19 04:13:52.072027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.657 qpair failed and we were unable to recover it. 
00:22:37.657 [2024-04-19 04:13:52.081757] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.657 [2024-04-19 04:13:52.081789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.657 [2024-04-19 04:13:52.081802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.657 [2024-04-19 04:13:52.081811] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.657 [2024-04-19 04:13:52.081816] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.657 [2024-04-19 04:13:52.092076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.657 qpair failed and we were unable to recover it. 
00:22:37.657 [2024-04-19 04:13:52.101686] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.657 [2024-04-19 04:13:52.101720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.657 [2024-04-19 04:13:52.101733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.657 [2024-04-19 04:13:52.101740] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.657 [2024-04-19 04:13:52.101745] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.657 [2024-04-19 04:13:52.112145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.657 qpair failed and we were unable to recover it. 
00:22:37.657 [2024-04-19 04:13:52.121790] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.657 [2024-04-19 04:13:52.121826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.657 [2024-04-19 04:13:52.121839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.657 [2024-04-19 04:13:52.121846] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.657 [2024-04-19 04:13:52.121850] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.657 [2024-04-19 04:13:52.132281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.657 qpair failed and we were unable to recover it. 
00:22:37.657 [2024-04-19 04:13:52.141893] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.657 [2024-04-19 04:13:52.141928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.657 [2024-04-19 04:13:52.141942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.657 [2024-04-19 04:13:52.141948] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.657 [2024-04-19 04:13:52.141953] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.657 [2024-04-19 04:13:52.152138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.657 qpair failed and we were unable to recover it. 
00:22:37.657 [2024-04-19 04:13:52.161926] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.657 [2024-04-19 04:13:52.161961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.657 [2024-04-19 04:13:52.161975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.657 [2024-04-19 04:13:52.161981] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.657 [2024-04-19 04:13:52.161986] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.657 [2024-04-19 04:13:52.172267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.657 qpair failed and we were unable to recover it. 
00:22:37.657 [2024-04-19 04:13:52.181953] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.657 [2024-04-19 04:13:52.181989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.657 [2024-04-19 04:13:52.182006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.657 [2024-04-19 04:13:52.182014] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.657 [2024-04-19 04:13:52.182020] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.916 [2024-04-19 04:13:52.192368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.916 qpair failed and we were unable to recover it. 
00:22:37.916 [2024-04-19 04:13:52.202032] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.916 [2024-04-19 04:13:52.202066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.916 [2024-04-19 04:13:52.202080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.916 [2024-04-19 04:13:52.202087] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.916 [2024-04-19 04:13:52.202092] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.916 [2024-04-19 04:13:52.212524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.916 qpair failed and we were unable to recover it. 
00:22:37.916 [2024-04-19 04:13:52.222052] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.916 [2024-04-19 04:13:52.222088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.916 [2024-04-19 04:13:52.222102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.916 [2024-04-19 04:13:52.222108] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.916 [2024-04-19 04:13:52.222113] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.916 [2024-04-19 04:13:52.232532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.916 qpair failed and we were unable to recover it. 
00:22:37.916 [2024-04-19 04:13:52.242219] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.916 [2024-04-19 04:13:52.242255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.916 [2024-04-19 04:13:52.242269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.917 [2024-04-19 04:13:52.242275] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.917 [2024-04-19 04:13:52.242280] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.917 [2024-04-19 04:13:52.252537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.917 qpair failed and we were unable to recover it. 
00:22:37.917 [2024-04-19 04:13:52.262155] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.917 [2024-04-19 04:13:52.262190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.917 [2024-04-19 04:13:52.262208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.917 [2024-04-19 04:13:52.262213] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.917 [2024-04-19 04:13:52.262219] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.917 [2024-04-19 04:13:52.272642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.917 qpair failed and we were unable to recover it. 
00:22:37.917 [2024-04-19 04:13:52.282278] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.917 [2024-04-19 04:13:52.282312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.917 [2024-04-19 04:13:52.282326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.917 [2024-04-19 04:13:52.282331] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.917 [2024-04-19 04:13:52.282336] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.917 [2024-04-19 04:13:52.292644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.917 qpair failed and we were unable to recover it. 
00:22:37.917 [2024-04-19 04:13:52.302300] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.917 [2024-04-19 04:13:52.302335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.917 [2024-04-19 04:13:52.302350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.917 [2024-04-19 04:13:52.302356] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.917 [2024-04-19 04:13:52.302361] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.917 [2024-04-19 04:13:52.312862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.917 qpair failed and we were unable to recover it. 
00:22:37.917 [2024-04-19 04:13:52.322438] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.917 [2024-04-19 04:13:52.322471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.917 [2024-04-19 04:13:52.322485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.917 [2024-04-19 04:13:52.322491] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.917 [2024-04-19 04:13:52.322496] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.917 [2024-04-19 04:13:52.332732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.917 qpair failed and we were unable to recover it. 
00:22:37.917 [2024-04-19 04:13:52.342457] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.917 [2024-04-19 04:13:52.342493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.917 [2024-04-19 04:13:52.342507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.917 [2024-04-19 04:13:52.342513] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.917 [2024-04-19 04:13:52.342521] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.917 [2024-04-19 04:13:52.352979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.917 qpair failed and we were unable to recover it. 
00:22:37.917 [2024-04-19 04:13:52.362608] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.917 [2024-04-19 04:13:52.362648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.917 [2024-04-19 04:13:52.362661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.917 [2024-04-19 04:13:52.362667] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.917 [2024-04-19 04:13:52.362672] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.917 [2024-04-19 04:13:52.372813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.917 qpair failed and we were unable to recover it. 
00:22:37.917 [2024-04-19 04:13:52.382614] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.917 [2024-04-19 04:13:52.382649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.917 [2024-04-19 04:13:52.382662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.917 [2024-04-19 04:13:52.382668] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.917 [2024-04-19 04:13:52.382673] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.917 [2024-04-19 04:13:52.393037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.917 qpair failed and we were unable to recover it. 
00:22:37.917 [2024-04-19 04:13:52.402700] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.917 [2024-04-19 04:13:52.402730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.917 [2024-04-19 04:13:52.402743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.917 [2024-04-19 04:13:52.402749] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.917 [2024-04-19 04:13:52.402755] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.917 [2024-04-19 04:13:52.413069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.917 qpair failed and we were unable to recover it. 
00:22:37.917 [2024-04-19 04:13:52.422777] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.917 [2024-04-19 04:13:52.422811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.917 [2024-04-19 04:13:52.422825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.917 [2024-04-19 04:13:52.422831] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.917 [2024-04-19 04:13:52.422836] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:37.917 [2024-04-19 04:13:52.433088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.917 qpair failed and we were unable to recover it. 
00:22:37.917 [2024-04-19 04:13:52.442899] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:37.917 [2024-04-19 04:13:52.442938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:37.917 [2024-04-19 04:13:52.442957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:37.917 [2024-04-19 04:13:52.442963] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:37.917 [2024-04-19 04:13:52.442968] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.176 [2024-04-19 04:13:52.453209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.176 qpair failed and we were unable to recover it. 
00:22:38.176 [2024-04-19 04:13:52.462865] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.176 [2024-04-19 04:13:52.462895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.176 [2024-04-19 04:13:52.462911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.176 [2024-04-19 04:13:52.462917] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.176 [2024-04-19 04:13:52.462922] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.176 [2024-04-19 04:13:52.473153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.176 qpair failed and we were unable to recover it. 
00:22:38.176 [2024-04-19 04:13:52.482952] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.176 [2024-04-19 04:13:52.482990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.176 [2024-04-19 04:13:52.483005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.176 [2024-04-19 04:13:52.483011] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.176 [2024-04-19 04:13:52.483016] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.176 [2024-04-19 04:13:52.493219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.176 qpair failed and we were unable to recover it. 
00:22:38.176 [2024-04-19 04:13:52.503039] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.176 [2024-04-19 04:13:52.503075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.176 [2024-04-19 04:13:52.503090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.176 [2024-04-19 04:13:52.503096] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.176 [2024-04-19 04:13:52.503101] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.176 [2024-04-19 04:13:52.513230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.176 qpair failed and we were unable to recover it. 
00:22:38.176 [2024-04-19 04:13:52.523084] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.176 [2024-04-19 04:13:52.523119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.176 [2024-04-19 04:13:52.523132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.176 [2024-04-19 04:13:52.523141] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.176 [2024-04-19 04:13:52.523146] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.176 [2024-04-19 04:13:52.533367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.176 qpair failed and we were unable to recover it. 
00:22:38.176 [2024-04-19 04:13:52.543014] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.176 [2024-04-19 04:13:52.543053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.176 [2024-04-19 04:13:52.543067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.176 [2024-04-19 04:13:52.543073] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.176 [2024-04-19 04:13:52.543078] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.176 [2024-04-19 04:13:52.553320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.176 qpair failed and we were unable to recover it. 
00:22:38.176 [2024-04-19 04:13:52.563187] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.176 [2024-04-19 04:13:52.563221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.176 [2024-04-19 04:13:52.563234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.176 [2024-04-19 04:13:52.563240] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.176 [2024-04-19 04:13:52.563245] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.176 [2024-04-19 04:13:52.573554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.176 qpair failed and we were unable to recover it. 
00:22:38.176 [2024-04-19 04:13:52.583151] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.176 [2024-04-19 04:13:52.583185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.176 [2024-04-19 04:13:52.583199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.176 [2024-04-19 04:13:52.583204] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.176 [2024-04-19 04:13:52.583209] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.176 [2024-04-19 04:13:52.593523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.176 qpair failed and we were unable to recover it. 
00:22:38.177 [2024-04-19 04:13:52.603218] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.177 [2024-04-19 04:13:52.603252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.177 [2024-04-19 04:13:52.603266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.177 [2024-04-19 04:13:52.603272] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.177 [2024-04-19 04:13:52.603277] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.177 [2024-04-19 04:13:52.613559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.177 qpair failed and we were unable to recover it. 
00:22:38.177 [2024-04-19 04:13:52.623352] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.177 [2024-04-19 04:13:52.623388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.177 [2024-04-19 04:13:52.623406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.177 [2024-04-19 04:13:52.623413] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.177 [2024-04-19 04:13:52.623419] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.177 [2024-04-19 04:13:52.633746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.177 qpair failed and we were unable to recover it. 
00:22:38.177 [2024-04-19 04:13:52.643376] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.177 [2024-04-19 04:13:52.643411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.177 [2024-04-19 04:13:52.643425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.177 [2024-04-19 04:13:52.643431] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.177 [2024-04-19 04:13:52.643436] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.177 [2024-04-19 04:13:52.653708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.177 qpair failed and we were unable to recover it. 
00:22:38.177 [2024-04-19 04:13:52.663383] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.177 [2024-04-19 04:13:52.663424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.177 [2024-04-19 04:13:52.663437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.177 [2024-04-19 04:13:52.663444] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.177 [2024-04-19 04:13:52.663449] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.177 [2024-04-19 04:13:52.673821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.177 qpair failed and we were unable to recover it. 
00:22:38.177 [2024-04-19 04:13:52.683496] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.177 [2024-04-19 04:13:52.683529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.177 [2024-04-19 04:13:52.683542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.177 [2024-04-19 04:13:52.683549] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.177 [2024-04-19 04:13:52.683554] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.177 [2024-04-19 04:13:52.693682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.177 qpair failed and we were unable to recover it. 
00:22:38.177 [2024-04-19 04:13:52.703573] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.177 [2024-04-19 04:13:52.703607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.177 [2024-04-19 04:13:52.703629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.177 [2024-04-19 04:13:52.703635] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.177 [2024-04-19 04:13:52.703641] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.436 [2024-04-19 04:13:52.713911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.437 qpair failed and we were unable to recover it. 
00:22:38.437 [2024-04-19 04:13:52.723560] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.437 [2024-04-19 04:13:52.723591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.437 [2024-04-19 04:13:52.723606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.437 [2024-04-19 04:13:52.723613] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.437 [2024-04-19 04:13:52.723618] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.437 [2024-04-19 04:13:52.733857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.437 qpair failed and we were unable to recover it. 
00:22:38.437 [2024-04-19 04:13:52.743651] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.437 [2024-04-19 04:13:52.743685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.437 [2024-04-19 04:13:52.743700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.437 [2024-04-19 04:13:52.743705] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.437 [2024-04-19 04:13:52.743711] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.437 [2024-04-19 04:13:52.754041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.437 qpair failed and we were unable to recover it. 
00:22:38.437 [2024-04-19 04:13:52.763724] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.437 [2024-04-19 04:13:52.763761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.437 [2024-04-19 04:13:52.763774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.437 [2024-04-19 04:13:52.763780] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.437 [2024-04-19 04:13:52.763785] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.437 [2024-04-19 04:13:52.773935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.437 qpair failed and we were unable to recover it. 
00:22:38.437 [2024-04-19 04:13:52.783751] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.437 [2024-04-19 04:13:52.783786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.437 [2024-04-19 04:13:52.783801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.437 [2024-04-19 04:13:52.783807] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.437 [2024-04-19 04:13:52.783815] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.437 [2024-04-19 04:13:52.794290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.437 qpair failed and we were unable to recover it. 
00:22:38.437 [2024-04-19 04:13:52.803752] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.437 [2024-04-19 04:13:52.803790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.437 [2024-04-19 04:13:52.803804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.437 [2024-04-19 04:13:52.803810] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.437 [2024-04-19 04:13:52.803815] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.437 [2024-04-19 04:13:52.814226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.437 qpair failed and we were unable to recover it. 
00:22:38.437 [2024-04-19 04:13:52.823798] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.437 [2024-04-19 04:13:52.823833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.437 [2024-04-19 04:13:52.823846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.437 [2024-04-19 04:13:52.823852] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.437 [2024-04-19 04:13:52.823857] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.437 [2024-04-19 04:13:52.834345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.437 qpair failed and we were unable to recover it. 
00:22:38.437 [2024-04-19 04:13:52.843800] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.437 [2024-04-19 04:13:52.843838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.437 [2024-04-19 04:13:52.843851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.437 [2024-04-19 04:13:52.843856] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.437 [2024-04-19 04:13:52.843861] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.437 [2024-04-19 04:13:52.854286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.437 qpair failed and we were unable to recover it. 
00:22:38.437 [2024-04-19 04:13:52.864016] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.437 [2024-04-19 04:13:52.864051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.437 [2024-04-19 04:13:52.864066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.437 [2024-04-19 04:13:52.864073] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.437 [2024-04-19 04:13:52.864079] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.437 [2024-04-19 04:13:52.874344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.437 qpair failed and we were unable to recover it. 
00:22:38.437 [2024-04-19 04:13:52.883962] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.437 [2024-04-19 04:13:52.883998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.437 [2024-04-19 04:13:52.884012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.437 [2024-04-19 04:13:52.884018] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.437 [2024-04-19 04:13:52.884023] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.437 [2024-04-19 04:13:52.894392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.437 qpair failed and we were unable to recover it. 
00:22:38.437 [2024-04-19 04:13:52.904080] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.437 [2024-04-19 04:13:52.904114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.437 [2024-04-19 04:13:52.904128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.437 [2024-04-19 04:13:52.904134] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.437 [2024-04-19 04:13:52.904140] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.437 [2024-04-19 04:13:52.914432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.437 qpair failed and we were unable to recover it. 
00:22:38.437 [2024-04-19 04:13:52.924088] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.437 [2024-04-19 04:13:52.924126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.437 [2024-04-19 04:13:52.924140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.437 [2024-04-19 04:13:52.924147] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.437 [2024-04-19 04:13:52.924152] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.437 [2024-04-19 04:13:52.934498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.437 qpair failed and we were unable to recover it. 
00:22:38.437 [2024-04-19 04:13:52.944188] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.437 [2024-04-19 04:13:52.944218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.437 [2024-04-19 04:13:52.944232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.437 [2024-04-19 04:13:52.944238] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.437 [2024-04-19 04:13:52.944243] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.437 [2024-04-19 04:13:52.954603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.437 qpair failed and we were unable to recover it. 
00:22:38.437 [2024-04-19 04:13:52.964237] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.437 [2024-04-19 04:13:52.964276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.437 [2024-04-19 04:13:52.964294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.437 [2024-04-19 04:13:52.964303] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.437 [2024-04-19 04:13:52.964309] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.696 [2024-04-19 04:13:52.974440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.696 qpair failed and we were unable to recover it. 
00:22:38.697 [2024-04-19 04:13:52.984268] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.697 [2024-04-19 04:13:52.984303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.697 [2024-04-19 04:13:52.984318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.697 [2024-04-19 04:13:52.984324] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.697 [2024-04-19 04:13:52.984329] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.697 [2024-04-19 04:13:52.994742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.697 qpair failed and we were unable to recover it. 
00:22:38.697 [2024-04-19 04:13:53.004278] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.697 [2024-04-19 04:13:53.004312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.697 [2024-04-19 04:13:53.004327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.697 [2024-04-19 04:13:53.004333] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.697 [2024-04-19 04:13:53.004338] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.697 [2024-04-19 04:13:53.014724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.697 qpair failed and we were unable to recover it. 
00:22:38.697 [2024-04-19 04:13:53.024395] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.697 [2024-04-19 04:13:53.024436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.697 [2024-04-19 04:13:53.024449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.697 [2024-04-19 04:13:53.024456] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.697 [2024-04-19 04:13:53.024460] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.697 [2024-04-19 04:13:53.034990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.697 qpair failed and we were unable to recover it. 
00:22:38.697 [2024-04-19 04:13:53.044430] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.697 [2024-04-19 04:13:53.044467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.697 [2024-04-19 04:13:53.044480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.697 [2024-04-19 04:13:53.044487] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.697 [2024-04-19 04:13:53.044492] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.697 [2024-04-19 04:13:53.054729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.697 qpair failed and we were unable to recover it. 
00:22:38.697 [2024-04-19 04:13:53.064528] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:38.697 [2024-04-19 04:13:53.064563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:38.697 [2024-04-19 04:13:53.064577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:38.697 [2024-04-19 04:13:53.064583] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:38.697 [2024-04-19 04:13:53.064588] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:38.697 [2024-04-19 04:13:53.075073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:38.697 qpair failed and we were unable to recover it. 
00:22:39.631 Write completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Write completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Read completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Read completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Write completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Read completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Write completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Read completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Read completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Write completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Write completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Write completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Write completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Read completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Read completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Read completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Read completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Write completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Write completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Write completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Write completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Read completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Read completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 
00:22:39.631 Write completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Read completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Write completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Write completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Read completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Write completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Write completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Read completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 Read completed with error (sct=0, sc=8) 00:22:39.631 starting I/O failed 00:22:39.631 [2024-04-19 04:13:54.080129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:39.631 [2024-04-19 04:13:54.087152] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:39.631 [2024-04-19 04:13:54.087190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:39.631 [2024-04-19 04:13:54.087205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:39.631 [2024-04-19 04:13:54.087212] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:39.631 [2024-04-19 04:13:54.087217] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d15c0 00:22:39.631 [2024-04-19 04:13:54.097848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:39.631 qpair failed and we were unable to recover it. 
00:22:39.631 [2024-04-19 04:13:54.107659] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:39.631 [2024-04-19 04:13:54.107690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:39.631 [2024-04-19 04:13:54.107704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:39.631 [2024-04-19 04:13:54.107710] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:39.631 [2024-04-19 04:13:54.107716] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d15c0 00:22:39.631 [2024-04-19 04:13:54.117958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:39.631 qpair failed and we were unable to recover it. 
00:22:39.631 [2024-04-19 04:13:54.127510] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:39.631 [2024-04-19 04:13:54.127547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:39.631 [2024-04-19 04:13:54.127565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:39.631 [2024-04-19 04:13:54.127572] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:39.631 [2024-04-19 04:13:54.127578] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:22:39.631 [2024-04-19 04:13:54.138003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:39.631 qpair failed and we were unable to recover it. 
00:22:39.631 [2024-04-19 04:13:54.147614] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:39.631 [2024-04-19 04:13:54.147650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:39.631 [2024-04-19 04:13:54.147664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:39.631 [2024-04-19 04:13:54.147670] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:39.631 [2024-04-19 04:13:54.147675] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:22:39.631 [2024-04-19 04:13:54.157994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:39.631 qpair failed and we were unable to recover it. 00:22:39.631 [2024-04-19 04:13:54.158068] nvme_ctrlr.c:4340:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:22:39.631 A controller has encountered a failure and is being reset. 
00:22:39.890 [2024-04-19 04:13:54.167750] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:39.890 [2024-04-19 04:13:54.167786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:39.890 [2024-04-19 04:13:54.167808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:39.890 [2024-04-19 04:13:54.167817] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:39.890 [2024-04-19 04:13:54.167824] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:39.890 [2024-04-19 04:13:54.178114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:39.890 qpair failed and we were unable to recover it. 
00:22:39.890 [2024-04-19 04:13:54.187704] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:39.890 [2024-04-19 04:13:54.187744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:39.890 [2024-04-19 04:13:54.187758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:39.890 [2024-04-19 04:13:54.187764] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:39.890 [2024-04-19 04:13:54.187770] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:39.890 [2024-04-19 04:13:54.198106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:39.890 qpair failed and we were unable to recover it. 00:22:39.890 [2024-04-19 04:13:54.198269] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:22:39.890 [2024-04-19 04:13:54.229510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:39.890 Controller properly reset. 
00:22:39.890 Initializing NVMe Controllers 00:22:39.890 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:39.890 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:39.890 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:22:39.890 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:22:39.890 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:22:39.890 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:22:39.890 Initialization complete. Launching workers. 00:22:39.890 Starting thread on core 1 00:22:39.890 Starting thread on core 2 00:22:39.890 Starting thread on core 3 00:22:39.890 Starting thread on core 0 00:22:39.890 04:13:54 -- host/target_disconnect.sh@59 -- # sync 00:22:39.890 00:22:39.890 real 0m12.491s 00:22:39.890 user 0m27.873s 00:22:39.890 sys 0m2.180s 00:22:39.890 04:13:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:39.890 04:13:54 -- common/autotest_common.sh@10 -- # set +x 00:22:39.890 ************************************ 00:22:39.890 END TEST nvmf_target_disconnect_tc2 00:22:39.890 ************************************ 00:22:39.890 04:13:54 -- host/target_disconnect.sh@80 -- # '[' -n 192.168.100.9 ']' 00:22:39.890 04:13:54 -- host/target_disconnect.sh@81 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:22:39.890 04:13:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:39.890 04:13:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:39.890 04:13:54 -- common/autotest_common.sh@10 -- # set +x 00:22:40.148 ************************************ 00:22:40.148 START TEST nvmf_target_disconnect_tc3 00:22:40.148 ************************************ 00:22:40.148 04:13:54 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc3 00:22:40.148 04:13:54 -- 
host/target_disconnect.sh@65 -- # reconnectpid=416134 00:22:40.148 04:13:54 -- host/target_disconnect.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:22:40.148 04:13:54 -- host/target_disconnect.sh@67 -- # sleep 2 00:22:40.148 EAL: No free 2048 kB hugepages reported on node 1 00:22:42.046 04:13:56 -- host/target_disconnect.sh@68 -- # kill -9 414709 00:22:42.046 04:13:56 -- host/target_disconnect.sh@70 -- # sleep 2 00:22:43.422 Write completed with error (sct=0, sc=8) 00:22:43.422 starting I/O failed 00:22:43.423 Read completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Write completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Read completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Write completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Write completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Read completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Read completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Read completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Read completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Read completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Read completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Write completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Write completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Read completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Read completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Write completed 
with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Write completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Write completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Read completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Read completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Read completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Write completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Write completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Write completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Read completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Write completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Write completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Read completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Write completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Read completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 Write completed with error (sct=0, sc=8) 00:22:43.423 starting I/O failed 00:22:43.423 [2024-04-19 04:13:57.584297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:43.990 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 62: 414709 Killed "${NVMF_APP[@]}" "$@" 00:22:43.990 04:13:58 -- host/target_disconnect.sh@71 -- # disconnect_init 192.168.100.9 00:22:43.990 04:13:58 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:22:43.990 04:13:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:43.990 04:13:58 -- common/autotest_common.sh@710 -- # xtrace_disable 
00:22:43.990 04:13:58 -- common/autotest_common.sh@10 -- # set +x 00:22:43.990 04:13:58 -- nvmf/common.sh@470 -- # nvmfpid=416704 00:22:43.990 04:13:58 -- nvmf/common.sh@471 -- # waitforlisten 416704 00:22:43.990 04:13:58 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:22:43.990 04:13:58 -- common/autotest_common.sh@817 -- # '[' -z 416704 ']' 00:22:43.990 04:13:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.990 04:13:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:43.990 04:13:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.990 04:13:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:43.990 04:13:58 -- common/autotest_common.sh@10 -- # set +x 00:22:43.990 [2024-04-19 04:13:58.495841] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:22:43.990 [2024-04-19 04:13:58.495887] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.990 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.248 [2024-04-19 04:13:58.561719] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:44.248 Read completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Write completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Write completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Write completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Read completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Write completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Write completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Read completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Read completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Write completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Write completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Write completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Read completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Write completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Read completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Read completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Read completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Read completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Read 
completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Read completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Write completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Write completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Read completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Write completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Read completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Read completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Write completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Write completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Write completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Read completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Read completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 Write completed with error (sct=0, sc=8) 00:22:44.249 starting I/O failed 00:22:44.249 [2024-04-19 04:13:58.589148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:44.249 [2024-04-19 04:13:58.629728] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.249 [2024-04-19 04:13:58.629767] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.249 [2024-04-19 04:13:58.629774] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.249 [2024-04-19 04:13:58.629779] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:44.249 [2024-04-19 04:13:58.629783] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:44.249 [2024-04-19 04:13:58.629910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:44.249 [2024-04-19 04:13:58.630020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:44.249 [2024-04-19 04:13:58.630126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:44.249 [2024-04-19 04:13:58.630127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:22:44.816 04:13:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:44.816 04:13:59 -- common/autotest_common.sh@850 -- # return 0 00:22:44.816 04:13:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:44.816 04:13:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:44.816 04:13:59 -- common/autotest_common.sh@10 -- # set +x 00:22:44.816 04:13:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.816 04:13:59 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:44.816 04:13:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.816 04:13:59 -- common/autotest_common.sh@10 -- # set +x 00:22:44.816 Malloc0 00:22:44.816 04:13:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.816 04:13:59 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:22:44.817 04:13:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.817 04:13:59 -- common/autotest_common.sh@10 -- # set +x 00:22:45.075 [2024-04-19 04:13:59.352388] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16df770/0x16eb380) succeed. 00:22:45.075 [2024-04-19 04:13:59.361846] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16e0d60/0x176b400) succeed. 
00:22:45.075 04:13:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.075 04:13:59 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:45.075 04:13:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.075 04:13:59 -- common/autotest_common.sh@10 -- # set +x 00:22:45.075 04:13:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.075 04:13:59 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:45.075 04:13:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.075 04:13:59 -- common/autotest_common.sh@10 -- # set +x 00:22:45.075 04:13:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.075 04:13:59 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:22:45.075 04:13:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.075 04:13:59 -- common/autotest_common.sh@10 -- # set +x 00:22:45.075 [2024-04-19 04:13:59.492809] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:22:45.075 04:13:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.075 04:13:59 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:22:45.075 04:13:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.075 04:13:59 -- common/autotest_common.sh@10 -- # set +x 00:22:45.075 04:13:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.075 04:13:59 -- host/target_disconnect.sh@73 -- # wait 416134 00:22:45.075 Write completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Read completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Write completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Read completed with 
error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Write completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Read completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Read completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Read completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Write completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Read completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Write completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Write completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Write completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Write completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Write completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Read completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Read completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Write completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Write completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Write completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Read completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Write completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Read completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Read completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Write completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Write completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Read completed with error 
(sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Write completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Read completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Read completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Read completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 Read completed with error (sct=0, sc=8) 00:22:45.075 starting I/O failed 00:22:45.075 [2024-04-19 04:13:59.594033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:45.075 [2024-04-19 04:13:59.595379] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:45.075 [2024-04-19 04:13:59.595395] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:45.075 [2024-04-19 04:13:59.595406] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:46.449 [2024-04-19 04:14:00.599213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:46.449 qpair failed and we were unable to recover it. 
00:22:46.449 [2024-04-19 04:14:00.600533] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:46.449 [2024-04-19 04:14:00.600549] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:46.449 [2024-04-19 04:14:00.600555] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:47.383 [2024-04-19 04:14:01.604264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:47.383 qpair failed and we were unable to recover it. 00:22:47.383 [2024-04-19 04:14:01.605696] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:47.383 [2024-04-19 04:14:01.605715] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:47.383 [2024-04-19 04:14:01.605721] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:48.316 [2024-04-19 04:14:02.609430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:48.316 qpair failed and we were unable to recover it. 
00:22:48.316 [2024-04-19 04:14:02.610740] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:48.316 [2024-04-19 04:14:02.610754] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:48.316 [2024-04-19 04:14:02.610760] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:49.249 [2024-04-19 04:14:03.614587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:49.249 qpair failed and we were unable to recover it. 00:22:49.249 [2024-04-19 04:14:03.615839] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:49.249 [2024-04-19 04:14:03.615853] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:49.249 [2024-04-19 04:14:03.615859] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:50.182 [2024-04-19 04:14:04.619470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:50.182 qpair failed and we were unable to recover it. 
00:22:50.182 [2024-04-19 04:14:04.620720] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:50.182 [2024-04-19 04:14:04.620734] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:50.182 [2024-04-19 04:14:04.620739] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:51.115 [2024-04-19 04:14:05.624275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:51.115 qpair failed and we were unable to recover it. 00:22:51.115 [2024-04-19 04:14:05.625685] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:51.115 [2024-04-19 04:14:05.625699] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:51.115 [2024-04-19 04:14:05.625705] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:22:52.488 [2024-04-19 04:14:06.629357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:52.488 qpair failed and we were unable to recover it. 
00:22:52.488 [2024-04-19 04:14:06.630778] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:52.488 [2024-04-19 04:14:06.630798] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:52.489 [2024-04-19 04:14:06.630804] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:22:53.422 [2024-04-19 04:14:07.634584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:53.422 qpair failed and we were unable to recover it. 00:22:53.422 [2024-04-19 04:14:07.635855] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:53.422 [2024-04-19 04:14:07.635868] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:53.422 [2024-04-19 04:14:07.635874] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:22:54.356 [2024-04-19 04:14:08.639590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:54.357 qpair failed and we were unable to recover it. 00:22:54.357 [2024-04-19 04:14:08.639709] nvme_ctrlr.c:4340:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:22:54.357 A controller has encountered a failure and is being reset. 
00:22:54.357 Resorting to new failover address 192.168.100.9 00:22:54.357 [2024-04-19 04:14:08.641271] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:54.357 [2024-04-19 04:14:08.641294] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:54.357 [2024-04-19 04:14:08.641302] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:55.290 [2024-04-19 04:14:09.645133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:55.290 qpair failed and we were unable to recover it. 00:22:55.290 [2024-04-19 04:14:09.646416] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:55.290 [2024-04-19 04:14:09.646429] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:55.290 [2024-04-19 04:14:09.646435] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:22:56.223 [2024-04-19 04:14:10.650249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:56.223 qpair failed and we were unable to recover it. 00:22:56.223 [2024-04-19 04:14:10.650368] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:56.223 [2024-04-19 04:14:10.650510] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:22:56.223 [2024-04-19 04:14:10.652300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:56.223 Controller properly reset. 00:22:57.156 Read completed with error (sct=0, sc=8) 00:22:57.156 starting I/O failed 00:22:57.156 Write completed with error (sct=0, sc=8) 00:22:57.156 starting I/O failed 00:22:57.156 Write completed with error (sct=0, sc=8) 00:22:57.156 starting I/O failed 00:22:57.156 Write completed with error (sct=0, sc=8) 00:22:57.156 starting I/O failed 00:22:57.156 Read completed with error (sct=0, sc=8) 00:22:57.156 starting I/O failed 00:22:57.156 Write completed with error (sct=0, sc=8) 00:22:57.156 starting I/O failed 00:22:57.156 Read completed with error (sct=0, sc=8) 00:22:57.156 starting I/O failed 00:22:57.156 Write completed with error (sct=0, sc=8) 00:22:57.156 starting I/O failed 00:22:57.156 Write completed with error (sct=0, sc=8) 00:22:57.156 starting I/O failed 00:22:57.156 Write completed with error (sct=0, sc=8) 00:22:57.156 starting I/O failed 00:22:57.156 Read completed with error (sct=0, sc=8) 00:22:57.156 starting I/O failed 00:22:57.156 Write completed with error (sct=0, sc=8) 00:22:57.156 starting I/O failed 00:22:57.156 Write completed with error (sct=0, sc=8) 00:22:57.156 starting I/O failed 00:22:57.156 Read completed with error (sct=0, sc=8) 00:22:57.156 starting I/O failed 00:22:57.156 Write completed with error (sct=0, sc=8) 00:22:57.156 starting I/O failed 00:22:57.156 Write completed with error (sct=0, sc=8) 00:22:57.156 starting I/O failed 00:22:57.156 Read completed with error (sct=0, sc=8) 00:22:57.156 starting I/O failed 00:22:57.156 Read completed with error (sct=0, sc=8) 00:22:57.156 starting I/O failed 00:22:57.156 
Read completed with error (sct=0, sc=8) 00:22:57.156 starting I/O failed 00:22:57.156 Read completed with error (sct=0, sc=8) 00:22:57.156 starting I/O failed 00:22:57.156 Read completed with error (sct=0, sc=8) 00:22:57.156 starting I/O failed 00:22:57.156 Read completed with error (sct=0, sc=8) 00:22:57.156 starting I/O failed 00:22:57.156 Read completed with error (sct=0, sc=8) 00:22:57.156 starting I/O failed 00:22:57.156 Read completed with error (sct=0, sc=8) 00:22:57.156 starting I/O failed 00:22:57.156 Read completed with error (sct=0, sc=8) 00:22:57.156 starting I/O failed 00:22:57.156 Write completed with error (sct=0, sc=8) 00:22:57.156 starting I/O failed 00:22:57.156 Write completed with error (sct=0, sc=8) 00:22:57.156 starting I/O failed 00:22:57.157 Write completed with error (sct=0, sc=8) 00:22:57.157 starting I/O failed 00:22:57.157 Read completed with error (sct=0, sc=8) 00:22:57.157 starting I/O failed 00:22:57.157 Read completed with error (sct=0, sc=8) 00:22:57.157 starting I/O failed 00:22:57.157 Write completed with error (sct=0, sc=8) 00:22:57.157 starting I/O failed 00:22:57.157 Read completed with error (sct=0, sc=8) 00:22:57.157 starting I/O failed 00:22:57.414 [2024-04-19 04:14:11.698473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:57.414 Initializing NVMe Controllers 00:22:57.414 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:57.414 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:57.414 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:22:57.414 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:22:57.414 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:22:57.414 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) 
with lcore 3 00:22:57.414 Initialization complete. Launching workers. 00:22:57.414 Starting thread on core 1 00:22:57.414 Starting thread on core 2 00:22:57.414 Starting thread on core 3 00:22:57.414 Starting thread on core 0 00:22:57.414 04:14:11 -- host/target_disconnect.sh@74 -- # sync 00:22:57.414 00:22:57.414 real 0m17.305s 00:22:57.414 user 1m1.447s 00:22:57.414 sys 0m3.906s 00:22:57.414 04:14:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:57.414 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:22:57.414 ************************************ 00:22:57.414 END TEST nvmf_target_disconnect_tc3 00:22:57.414 ************************************ 00:22:57.414 04:14:11 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:57.414 04:14:11 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:22:57.414 04:14:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:57.414 04:14:11 -- nvmf/common.sh@117 -- # sync 00:22:57.414 04:14:11 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:57.414 04:14:11 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:57.414 04:14:11 -- nvmf/common.sh@120 -- # set +e 00:22:57.414 04:14:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:57.414 04:14:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:57.414 rmmod nvme_rdma 00:22:57.414 rmmod nvme_fabrics 00:22:57.414 04:14:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:57.414 04:14:11 -- nvmf/common.sh@124 -- # set -e 00:22:57.414 04:14:11 -- nvmf/common.sh@125 -- # return 0 00:22:57.414 04:14:11 -- nvmf/common.sh@478 -- # '[' -n 416704 ']' 00:22:57.414 04:14:11 -- nvmf/common.sh@479 -- # killprocess 416704 00:22:57.414 04:14:11 -- common/autotest_common.sh@936 -- # '[' -z 416704 ']' 00:22:57.414 04:14:11 -- common/autotest_common.sh@940 -- # kill -0 416704 00:22:57.414 04:14:11 -- common/autotest_common.sh@941 -- # uname 00:22:57.414 04:14:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:57.414 04:14:11 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 416704 00:22:57.414 04:14:11 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:22:57.414 04:14:11 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:22:57.414 04:14:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 416704' 00:22:57.414 killing process with pid 416704 00:22:57.414 04:14:11 -- common/autotest_common.sh@955 -- # kill 416704 00:22:57.414 04:14:11 -- common/autotest_common.sh@960 -- # wait 416704 00:22:57.673 04:14:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:57.673 04:14:12 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:22:57.673 00:22:57.673 real 0m37.515s 00:22:57.673 user 2m25.639s 00:22:57.673 sys 0m11.023s 00:22:57.673 04:14:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:57.673 04:14:12 -- common/autotest_common.sh@10 -- # set +x 00:22:57.673 ************************************ 00:22:57.673 END TEST nvmf_target_disconnect 00:22:57.673 ************************************ 00:22:57.673 04:14:12 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:22:57.673 04:14:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:57.673 04:14:12 -- common/autotest_common.sh@10 -- # set +x 00:22:57.933 04:14:12 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:22:57.933 00:22:57.933 real 15m40.112s 00:22:57.933 user 41m23.383s 00:22:57.933 sys 4m6.116s 00:22:57.933 04:14:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:57.933 04:14:12 -- common/autotest_common.sh@10 -- # set +x 00:22:57.933 ************************************ 00:22:57.933 END TEST nvmf_rdma 00:22:57.933 ************************************ 00:22:57.933 04:14:12 -- spdk/autotest.sh@283 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:22:57.933 04:14:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:57.933 04:14:12 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:22:57.933 04:14:12 -- common/autotest_common.sh@10 -- # set +x 00:22:57.933 ************************************ 00:22:57.933 START TEST spdkcli_nvmf_rdma 00:22:57.933 ************************************ 00:22:57.933 04:14:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:22:58.192 * Looking for test storage... 00:22:58.192 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:22:58.192 04:14:12 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:22:58.192 04:14:12 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:22:58.192 04:14:12 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:22:58.192 04:14:12 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:58.192 04:14:12 -- nvmf/common.sh@7 -- # uname -s 00:22:58.192 04:14:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:58.192 04:14:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:58.192 04:14:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:58.192 04:14:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:58.192 04:14:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:58.192 04:14:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:58.192 04:14:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:58.192 04:14:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:58.192 04:14:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:58.192 04:14:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:58.192 04:14:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:22:58.192 04:14:12 -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:22:58.192 04:14:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:58.192 04:14:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:58.192 04:14:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:58.192 04:14:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:58.192 04:14:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:58.192 04:14:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:58.192 04:14:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:58.192 04:14:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:58.192 04:14:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.193 04:14:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.193 04:14:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.193 04:14:12 -- paths/export.sh@5 -- # export PATH 00:22:58.193 04:14:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.193 04:14:12 -- nvmf/common.sh@47 -- # : 0 00:22:58.193 04:14:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:58.193 04:14:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:58.193 04:14:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:58.193 04:14:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:58.193 04:14:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:58.193 04:14:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:58.193 04:14:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:58.193 04:14:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:58.193 04:14:12 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:22:58.193 04:14:12 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:22:58.193 04:14:12 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:22:58.193 04:14:12 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:22:58.193 04:14:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:58.193 04:14:12 -- common/autotest_common.sh@10 -- # set +x 00:22:58.193 04:14:12 
-- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:22:58.193 04:14:12 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=419411 00:22:58.193 04:14:12 -- spdkcli/common.sh@34 -- # waitforlisten 419411 00:22:58.193 04:14:12 -- common/autotest_common.sh@817 -- # '[' -z 419411 ']' 00:22:58.193 04:14:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.193 04:14:12 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:22:58.193 04:14:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:58.193 04:14:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.193 04:14:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:58.193 04:14:12 -- common/autotest_common.sh@10 -- # set +x 00:22:58.193 [2024-04-19 04:14:12.560537] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:22:58.193 [2024-04-19 04:14:12.560579] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid419411 ] 00:22:58.193 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.193 [2024-04-19 04:14:12.609867] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:58.193 [2024-04-19 04:14:12.677298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.193 [2024-04-19 04:14:12.677300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.129 04:14:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:59.129 04:14:13 -- common/autotest_common.sh@850 -- # return 0 00:22:59.129 04:14:13 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:22:59.129 04:14:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:59.129 04:14:13 -- common/autotest_common.sh@10 -- # set +x 00:22:59.129 04:14:13 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:22:59.129 04:14:13 -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:22:59.129 04:14:13 -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:22:59.129 04:14:13 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:22:59.129 04:14:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.129 04:14:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:59.129 04:14:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:59.129 04:14:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:59.129 04:14:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.129 04:14:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:59.129 04:14:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.129 04:14:13 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:59.129 04:14:13 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:59.129 04:14:13 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:22:59.129 04:14:13 -- common/autotest_common.sh@10 -- # set +x 00:23:04.405 04:14:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:04.405 04:14:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:04.405 04:14:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:04.405 04:14:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:04.405 04:14:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:04.405 04:14:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:04.405 04:14:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:04.405 04:14:18 -- nvmf/common.sh@295 -- # net_devs=() 00:23:04.405 04:14:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:04.405 04:14:18 -- nvmf/common.sh@296 -- # e810=() 00:23:04.405 04:14:18 -- nvmf/common.sh@296 -- # local -ga e810 00:23:04.405 04:14:18 -- nvmf/common.sh@297 -- # x722=() 00:23:04.405 04:14:18 -- nvmf/common.sh@297 -- # local -ga x722 00:23:04.405 04:14:18 -- nvmf/common.sh@298 -- # mlx=() 00:23:04.405 04:14:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:04.405 04:14:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:04.405 04:14:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:04.405 04:14:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:04.405 04:14:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:04.405 04:14:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:04.405 04:14:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:04.405 04:14:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:04.405 04:14:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:04.405 04:14:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:04.405 04:14:18 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:04.405 04:14:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:04.405 04:14:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:04.405 04:14:18 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:04.405 04:14:18 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:04.405 04:14:18 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:04.405 04:14:18 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:04.405 04:14:18 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:04.405 04:14:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:04.405 04:14:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:04.405 04:14:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:23:04.405 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:23:04.405 04:14:18 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:04.405 04:14:18 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:04.405 04:14:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:04.405 04:14:18 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:04.405 04:14:18 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:04.405 04:14:18 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:04.405 04:14:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:04.405 04:14:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:23:04.405 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:23:04.405 04:14:18 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:04.405 04:14:18 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:04.405 04:14:18 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:04.405 04:14:18 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:04.405 04:14:18 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:04.405 04:14:18 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:04.405 
04:14:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:04.405 04:14:18 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:04.405 04:14:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:04.405 04:14:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.405 04:14:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:04.405 04:14:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.405 04:14:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:23:04.405 Found net devices under 0000:18:00.0: mlx_0_0 00:23:04.405 04:14:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.405 04:14:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:04.405 04:14:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.405 04:14:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:04.405 04:14:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.405 04:14:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:23:04.405 Found net devices under 0000:18:00.1: mlx_0_1 00:23:04.406 04:14:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.406 04:14:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:04.406 04:14:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:04.406 04:14:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:04.406 04:14:18 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:23:04.406 04:14:18 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:23:04.406 04:14:18 -- nvmf/common.sh@409 -- # rdma_device_init 00:23:04.406 04:14:18 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:23:04.406 04:14:18 -- nvmf/common.sh@58 -- # uname 00:23:04.406 04:14:18 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:04.406 04:14:18 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:04.406 04:14:18 -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:04.406 04:14:18 -- 
nvmf/common.sh@64 -- # modprobe ib_umad 00:23:04.406 04:14:18 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:04.406 04:14:18 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:04.406 04:14:18 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:04.406 04:14:18 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:04.406 04:14:18 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:23:04.406 04:14:18 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:04.406 04:14:18 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:04.406 04:14:18 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:04.406 04:14:18 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:04.406 04:14:18 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:04.406 04:14:18 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:04.406 04:14:18 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:04.406 04:14:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:04.406 04:14:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:04.406 04:14:18 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:04.406 04:14:18 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:04.406 04:14:18 -- nvmf/common.sh@105 -- # continue 2 00:23:04.406 04:14:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:04.406 04:14:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:04.406 04:14:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:04.406 04:14:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:04.406 04:14:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:04.406 04:14:18 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:04.406 04:14:18 -- nvmf/common.sh@105 -- # continue 2 00:23:04.406 04:14:18 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:04.406 04:14:18 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 
00:23:04.406 04:14:18 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:04.406 04:14:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:04.406 04:14:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:04.406 04:14:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:04.406 04:14:18 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:04.406 04:14:18 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:04.406 04:14:18 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:04.406 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:04.406 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:23:04.406 altname enp24s0f0np0 00:23:04.406 altname ens785f0np0 00:23:04.406 inet 192.168.100.8/24 scope global mlx_0_0 00:23:04.406 valid_lft forever preferred_lft forever 00:23:04.406 04:14:18 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:04.406 04:14:18 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:04.406 04:14:18 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:04.406 04:14:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:04.406 04:14:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:04.406 04:14:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:04.406 04:14:18 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:04.406 04:14:18 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:04.406 04:14:18 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:04.406 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:04.406 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:23:04.406 altname enp24s0f1np1 00:23:04.406 altname ens785f1np1 00:23:04.406 inet 192.168.100.9/24 scope global mlx_0_1 00:23:04.406 valid_lft forever preferred_lft forever 00:23:04.406 04:14:18 -- nvmf/common.sh@411 -- # return 0 00:23:04.406 04:14:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:04.406 04:14:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:04.406 04:14:18 -- nvmf/common.sh@444 -- # [[ 
rdma == \r\d\m\a ]] 00:23:04.406 04:14:18 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:23:04.406 04:14:18 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:04.406 04:14:18 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:04.406 04:14:18 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:04.406 04:14:18 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:04.406 04:14:18 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:04.406 04:14:18 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:04.406 04:14:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:04.406 04:14:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:04.406 04:14:18 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:04.406 04:14:18 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:04.406 04:14:18 -- nvmf/common.sh@105 -- # continue 2 00:23:04.406 04:14:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:04.406 04:14:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:04.406 04:14:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:04.406 04:14:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:04.406 04:14:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:04.406 04:14:18 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:04.406 04:14:18 -- nvmf/common.sh@105 -- # continue 2 00:23:04.406 04:14:18 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:04.406 04:14:18 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:04.406 04:14:18 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:04.406 04:14:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:04.406 04:14:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:04.406 04:14:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:04.406 04:14:18 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 
00:23:04.406 04:14:18 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:04.406 04:14:18 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:04.406 04:14:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:04.406 04:14:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:04.406 04:14:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:04.406 04:14:18 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:23:04.406 192.168.100.9' 00:23:04.406 04:14:18 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:04.406 192.168.100.9' 00:23:04.406 04:14:18 -- nvmf/common.sh@446 -- # head -n 1 00:23:04.406 04:14:18 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:04.406 04:14:18 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:23:04.406 192.168.100.9' 00:23:04.406 04:14:18 -- nvmf/common.sh@447 -- # tail -n +2 00:23:04.406 04:14:18 -- nvmf/common.sh@447 -- # head -n 1 00:23:04.406 04:14:18 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:04.406 04:14:18 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:23:04.406 04:14:18 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:04.406 04:14:18 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:23:04.406 04:14:18 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:23:04.406 04:14:18 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:23:04.406 04:14:18 -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:23:04.406 04:14:18 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:23:04.406 04:14:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:04.406 04:14:18 -- common/autotest_common.sh@10 -- # set +x 00:23:04.406 04:14:18 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:23:04.406 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:23:04.406 '\''/bdevs/malloc create 32 512 
Malloc3'\'' '\''Malloc3'\'' True 00:23:04.406 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:23:04.406 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:23:04.406 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:23:04.406 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:23:04.406 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:23:04.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:23:04.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:23:04.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:23:04.406 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:04.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:23:04.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:23:04.406 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:04.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:23:04.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:23:04.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:23:04.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create 
nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:23:04.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:04.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:23:04.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:23:04.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:23:04.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:23:04.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:23:04.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:23:04.406 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:23:04.406 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:23:04.406 ' 00:23:04.976 [2024-04-19 04:14:19.301933] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:23:06.884 [2024-04-19 04:14:21.337634] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2119310/0x1fa3040) succeed. 00:23:06.884 [2024-04-19 04:14:21.348538] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2119800/0x208e100) succeed. 
00:23:08.264 [2024-04-19 04:14:22.570691] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:23:10.813 [2024-04-19 04:14:24.717100] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:23:12.195 [2024-04-19 04:14:26.571033] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:23:13.575 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:23:13.575 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:23:13.575 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:23:13.575 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:23:13.575 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:23:13.575 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:23:13.575 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:23:13.575 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:23:13.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:23:13.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:23:13.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:23:13.575 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:13.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:23:13.575 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:23:13.575 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:13.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:23:13.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:23:13.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:23:13.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:23:13.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:13.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:23:13.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:23:13.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:23:13.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:23:13.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:23:13.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:23:13.575 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:23:13.575 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:23:13.834 04:14:28 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:23:13.834 04:14:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:13.834 04:14:28 -- common/autotest_common.sh@10 -- # set +x 00:23:13.834 04:14:28 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:23:13.834 04:14:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:13.834 04:14:28 -- common/autotest_common.sh@10 -- # set +x 00:23:13.834 04:14:28 -- spdkcli/nvmf.sh@69 -- # check_match 00:23:13.834 04:14:28 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:23:14.094 04:14:28 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:23:14.094 04:14:28 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:23:14.094 04:14:28 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:23:14.094 04:14:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:14.094 04:14:28 -- common/autotest_common.sh@10 -- # set +x 00:23:14.094 04:14:28 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:23:14.094 04:14:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:14.094 04:14:28 -- common/autotest_common.sh@10 -- # set +x 00:23:14.094 04:14:28 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:23:14.094 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:23:14.094 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:23:14.094 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:23:14.094 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:23:14.094 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:23:14.094 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:23:14.094 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:23:14.094 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:23:14.094 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:23:14.094 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:23:14.094 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:23:14.094 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:23:14.094 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:23:14.094 ' 00:23:19.370 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:23:19.370 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:23:19.370 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:23:19.370 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:23:19.370 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:23:19.370 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:23:19.370 Executing command: ['/nvmf/subsystem delete 
nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:23:19.370 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:23:19.370 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:23:19.370 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:23:19.370 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:23:19.370 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:23:19.370 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:23:19.370 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:23:19.370 04:14:33 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:23:19.370 04:14:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:19.370 04:14:33 -- common/autotest_common.sh@10 -- # set +x 00:23:19.370 04:14:33 -- spdkcli/nvmf.sh@90 -- # killprocess 419411 00:23:19.370 04:14:33 -- common/autotest_common.sh@936 -- # '[' -z 419411 ']' 00:23:19.370 04:14:33 -- common/autotest_common.sh@940 -- # kill -0 419411 00:23:19.370 04:14:33 -- common/autotest_common.sh@941 -- # uname 00:23:19.370 04:14:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:19.370 04:14:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 419411 00:23:19.370 04:14:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:19.370 04:14:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:19.370 04:14:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 419411' 00:23:19.370 killing process with pid 419411 00:23:19.370 04:14:33 -- common/autotest_common.sh@955 -- # kill 419411 00:23:19.370 [2024-04-19 04:14:33.549575] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:23:19.370 04:14:33 -- 
common/autotest_common.sh@960 -- # wait 419411 00:23:19.370 04:14:33 -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:23:19.370 04:14:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:19.370 04:14:33 -- nvmf/common.sh@117 -- # sync 00:23:19.370 04:14:33 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:19.370 04:14:33 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:19.370 04:14:33 -- nvmf/common.sh@120 -- # set +e 00:23:19.370 04:14:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:19.370 04:14:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:19.370 rmmod nvme_rdma 00:23:19.370 rmmod nvme_fabrics 00:23:19.370 04:14:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:19.370 04:14:33 -- nvmf/common.sh@124 -- # set -e 00:23:19.370 04:14:33 -- nvmf/common.sh@125 -- # return 0 00:23:19.370 04:14:33 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:23:19.370 04:14:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:19.370 04:14:33 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:23:19.370 00:23:19.370 real 0m21.435s 00:23:19.370 user 0m45.227s 00:23:19.370 sys 0m4.926s 00:23:19.371 04:14:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:19.371 04:14:33 -- common/autotest_common.sh@10 -- # set +x 00:23:19.371 ************************************ 00:23:19.371 END TEST spdkcli_nvmf_rdma 00:23:19.371 ************************************ 00:23:19.371 04:14:33 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:23:19.371 04:14:33 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:23:19.371 04:14:33 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:23:19.371 04:14:33 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:23:19.371 04:14:33 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:23:19.371 04:14:33 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:23:19.371 04:14:33 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:23:19.371 04:14:33 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:23:19.371 04:14:33 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 
00:23:19.371 04:14:33 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:23:19.371 04:14:33 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:23:19.371 04:14:33 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:23:19.371 04:14:33 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:23:19.371 04:14:33 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:23:19.371 04:14:33 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:23:19.371 04:14:33 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:23:19.371 04:14:33 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:23:19.371 04:14:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:19.371 04:14:33 -- common/autotest_common.sh@10 -- # set +x 00:23:19.371 04:14:33 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:23:19.371 04:14:33 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:23:19.371 04:14:33 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:23:19.371 04:14:33 -- common/autotest_common.sh@10 -- # set +x 00:23:24.653 INFO: APP EXITING 00:23:24.653 INFO: killing all VMs 00:23:24.653 INFO: killing vhost app 00:23:24.653 INFO: EXIT DONE 00:23:26.560 Waiting for block devices as requested 00:23:26.560 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:23:26.560 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:23:26.820 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:23:26.820 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:23:26.820 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:23:26.820 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:23:27.079 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:23:27.079 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:23:27.079 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:23:27.338 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:23:27.338 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:23:27.338 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:23:27.338 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:23:27.598 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:23:27.598 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:23:27.598 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:23:27.857 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:23:32.054 Cleaning 00:23:32.054 Removing: /var/run/dpdk/spdk0/config 00:23:32.054 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:32.054 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:32.054 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:32.054 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:32.054 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:23:32.054 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:23:32.054 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:23:32.054 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:23:32.054 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:32.054 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:32.054 Removing: /var/run/dpdk/spdk1/config 00:23:32.054 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:23:32.054 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:23:32.054 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:23:32.054 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:23:32.054 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:23:32.054 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:23:32.054 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:23:32.054 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:23:32.054 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:23:32.054 Removing: /var/run/dpdk/spdk1/hugepage_info 00:23:32.054 Removing: /var/run/dpdk/spdk1/mp_socket 00:23:32.054 Removing: /var/run/dpdk/spdk2/config 00:23:32.054 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:23:32.054 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:23:32.054 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:23:32.054 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 
00:23:32.054 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:23:32.054 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:23:32.054 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:23:32.054 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:23:32.054 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:23:32.054 Removing: /var/run/dpdk/spdk2/hugepage_info 00:23:32.054 Removing: /var/run/dpdk/spdk3/config 00:23:32.054 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:23:32.054 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:23:32.054 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:23:32.054 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:23:32.054 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:23:32.054 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:23:32.054 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:23:32.054 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:23:32.054 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:23:32.054 Removing: /var/run/dpdk/spdk3/hugepage_info 00:23:32.054 Removing: /var/run/dpdk/spdk4/config 00:23:32.054 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:23:32.054 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:23:32.054 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:23:32.055 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:23:32.055 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:23:32.055 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:23:32.055 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:23:32.055 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:23:32.055 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:23:32.055 Removing: /var/run/dpdk/spdk4/hugepage_info 00:23:32.055 Removing: /dev/shm/bdevperf_trace.pid238521 00:23:32.055 Removing: /dev/shm/bdevperf_trace.pid340306 00:23:32.055 Removing: /dev/shm/bdev_svc_trace.1 00:23:32.055 Removing: 
/dev/shm/nvmf_trace.0 00:23:32.055 Removing: /dev/shm/spdk_tgt_trace.pid122545 00:23:32.055 Removing: /var/run/dpdk/spdk0 00:23:32.055 Removing: /var/run/dpdk/spdk1 00:23:32.055 Removing: /var/run/dpdk/spdk2 00:23:32.055 Removing: /var/run/dpdk/spdk3 00:23:32.055 Removing: /var/run/dpdk/spdk4 00:23:32.055 Removing: /var/run/dpdk/spdk_pid119130 00:23:32.055 Removing: /var/run/dpdk/spdk_pid120789 00:23:32.055 Removing: /var/run/dpdk/spdk_pid122545 00:23:32.055 Removing: /var/run/dpdk/spdk_pid123393 00:23:32.055 Removing: /var/run/dpdk/spdk_pid124372 00:23:32.055 Removing: /var/run/dpdk/spdk_pid124628 00:23:32.055 Removing: /var/run/dpdk/spdk_pid125745 00:23:32.055 Removing: /var/run/dpdk/spdk_pid126010 00:23:32.055 Removing: /var/run/dpdk/spdk_pid126394 00:23:32.055 Removing: /var/run/dpdk/spdk_pid131562 00:23:32.055 Removing: /var/run/dpdk/spdk_pid133563 00:23:32.055 Removing: /var/run/dpdk/spdk_pid133965 00:23:32.055 Removing: /var/run/dpdk/spdk_pid134689 00:23:32.055 Removing: /var/run/dpdk/spdk_pid135135 00:23:32.055 Removing: /var/run/dpdk/spdk_pid135470 00:23:32.055 Removing: /var/run/dpdk/spdk_pid135765 00:23:32.055 Removing: /var/run/dpdk/spdk_pid136052 00:23:32.055 Removing: /var/run/dpdk/spdk_pid136369 00:23:32.055 Removing: /var/run/dpdk/spdk_pid137483 00:23:32.055 Removing: /var/run/dpdk/spdk_pid140577 00:23:32.055 Removing: /var/run/dpdk/spdk_pid140909 00:23:32.055 Removing: /var/run/dpdk/spdk_pid141206 00:23:32.055 Removing: /var/run/dpdk/spdk_pid141410 00:23:32.055 Removing: /var/run/dpdk/spdk_pid141791 00:23:32.055 Removing: /var/run/dpdk/spdk_pid142042 00:23:32.055 Removing: /var/run/dpdk/spdk_pid142407 00:23:32.055 Removing: /var/run/dpdk/spdk_pid142618 00:23:32.055 Removing: /var/run/dpdk/spdk_pid142922 00:23:32.055 Removing: /var/run/dpdk/spdk_pid143162 00:23:32.055 Removing: /var/run/dpdk/spdk_pid143312 00:23:32.055 Removing: /var/run/dpdk/spdk_pid143498 00:23:32.055 Removing: /var/run/dpdk/spdk_pid144129 00:23:32.055 Removing: 
/var/run/dpdk/spdk_pid144413 00:23:32.055 Removing: /var/run/dpdk/spdk_pid144745 00:23:32.055 Removing: /var/run/dpdk/spdk_pid145056 00:23:32.055 Removing: /var/run/dpdk/spdk_pid145093 00:23:32.055 Removing: /var/run/dpdk/spdk_pid145428 00:23:32.055 Removing: /var/run/dpdk/spdk_pid145714 00:23:32.055 Removing: /var/run/dpdk/spdk_pid146011 00:23:32.055 Removing: /var/run/dpdk/spdk_pid146298 00:23:32.055 Removing: /var/run/dpdk/spdk_pid146592 00:23:32.055 Removing: /var/run/dpdk/spdk_pid146875 00:23:32.055 Removing: /var/run/dpdk/spdk_pid147168 00:23:32.055 Removing: /var/run/dpdk/spdk_pid147452 00:23:32.055 Removing: /var/run/dpdk/spdk_pid147738 00:23:32.055 Removing: /var/run/dpdk/spdk_pid148032 00:23:32.055 Removing: /var/run/dpdk/spdk_pid148321 00:23:32.055 Removing: /var/run/dpdk/spdk_pid148610 00:23:32.055 Removing: /var/run/dpdk/spdk_pid148899 00:23:32.055 Removing: /var/run/dpdk/spdk_pid149189 00:23:32.055 Removing: /var/run/dpdk/spdk_pid149478 00:23:32.055 Removing: /var/run/dpdk/spdk_pid149768 00:23:32.055 Removing: /var/run/dpdk/spdk_pid150053 00:23:32.055 Removing: /var/run/dpdk/spdk_pid150348 00:23:32.055 Removing: /var/run/dpdk/spdk_pid150644 00:23:32.055 Removing: /var/run/dpdk/spdk_pid150954 00:23:32.055 Removing: /var/run/dpdk/spdk_pid151283 00:23:32.055 Removing: /var/run/dpdk/spdk_pid151542 00:23:32.055 Removing: /var/run/dpdk/spdk_pid151891 00:23:32.055 Removing: /var/run/dpdk/spdk_pid155876 00:23:32.055 Removing: /var/run/dpdk/spdk_pid200187 00:23:32.055 Removing: /var/run/dpdk/spdk_pid204259 00:23:32.055 Removing: /var/run/dpdk/spdk_pid213828 00:23:32.055 Removing: /var/run/dpdk/spdk_pid219104 00:23:32.055 Removing: /var/run/dpdk/spdk_pid222639 00:23:32.055 Removing: /var/run/dpdk/spdk_pid223511 00:23:32.055 Removing: /var/run/dpdk/spdk_pid238521 00:23:32.055 Removing: /var/run/dpdk/spdk_pid238863 00:23:32.055 Removing: /var/run/dpdk/spdk_pid243527 00:23:32.055 Removing: /var/run/dpdk/spdk_pid249664 00:23:32.055 Removing: 
/var/run/dpdk/spdk_pid252415 00:23:32.055 Removing: /var/run/dpdk/spdk_pid262483 00:23:32.055 Removing: /var/run/dpdk/spdk_pid287183 00:23:32.055 Removing: /var/run/dpdk/spdk_pid290788 00:23:32.055 Removing: /var/run/dpdk/spdk_pid308480 00:23:32.055 Removing: /var/run/dpdk/spdk_pid338072 00:23:32.055 Removing: /var/run/dpdk/spdk_pid339269 00:23:32.055 Removing: /var/run/dpdk/spdk_pid340306 00:23:32.055 Removing: /var/run/dpdk/spdk_pid344443 00:23:32.055 Removing: /var/run/dpdk/spdk_pid351514 00:23:32.055 Removing: /var/run/dpdk/spdk_pid352543 00:23:32.055 Removing: /var/run/dpdk/spdk_pid353587 00:23:32.055 Removing: /var/run/dpdk/spdk_pid354633 00:23:32.055 Removing: /var/run/dpdk/spdk_pid354904 00:23:32.055 Removing: /var/run/dpdk/spdk_pid359485 00:23:32.055 Removing: /var/run/dpdk/spdk_pid359493 00:23:32.055 Removing: /var/run/dpdk/spdk_pid364090 00:23:32.055 Removing: /var/run/dpdk/spdk_pid364618 00:23:32.055 Removing: /var/run/dpdk/spdk_pid365152 00:23:32.055 Removing: /var/run/dpdk/spdk_pid365934 00:23:32.055 Removing: /var/run/dpdk/spdk_pid365958 00:23:32.055 Removing: /var/run/dpdk/spdk_pid371049 00:23:32.315 Removing: /var/run/dpdk/spdk_pid371703 00:23:32.315 Removing: /var/run/dpdk/spdk_pid375890 00:23:32.315 Removing: /var/run/dpdk/spdk_pid378884 00:23:32.315 Removing: /var/run/dpdk/spdk_pid386465 00:23:32.315 Removing: /var/run/dpdk/spdk_pid386492 00:23:32.315 Removing: /var/run/dpdk/spdk_pid407031 00:23:32.315 Removing: /var/run/dpdk/spdk_pid407337 00:23:32.315 Removing: /var/run/dpdk/spdk_pid413440 00:23:32.315 Removing: /var/run/dpdk/spdk_pid414006 00:23:32.315 Removing: /var/run/dpdk/spdk_pid416134 00:23:32.315 Removing: /var/run/dpdk/spdk_pid419411 00:23:32.315 Clean 00:23:32.315 04:14:46 -- common/autotest_common.sh@1437 -- # return 0 00:23:32.315 04:14:46 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:23:32.315 04:14:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:32.315 04:14:46 -- common/autotest_common.sh@10 -- # set +x 
00:23:32.315 04:14:46 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:23:32.315 04:14:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:32.315 04:14:46 -- common/autotest_common.sh@10 -- # set +x 00:23:32.575 04:14:46 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:23:32.575 04:14:46 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:23:32.575 04:14:46 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:23:32.575 04:14:46 -- spdk/autotest.sh@389 -- # hash lcov 00:23:32.575 04:14:46 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:23:32.575 04:14:46 -- spdk/autotest.sh@391 -- # hostname 00:23:32.575 04:14:46 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-37 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:23:32.575 geninfo: WARNING: invalid characters removed from testname! 
00:23:50.680 04:15:03 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:23:51.619 04:15:05 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:23:52.999 04:15:07 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:23:54.906 04:15:08 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:23:56.287 04:15:10 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 
'*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:23:57.667 04:15:11 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:23:59.049 04:15:13 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:23:59.049 04:15:13 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:23:59.049 04:15:13 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]]
00:23:59.049 04:15:13 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:59.049 04:15:13 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:23:59.049 04:15:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:59.049 04:15:13 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:59.049 04:15:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:59.049 04:15:13 -- paths/export.sh@5 -- $ export PATH
00:23:59.049 04:15:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:59.049 04:15:13 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:23:59.049 04:15:13 -- common/autobuild_common.sh@435 -- $ date +%s
00:23:59.049 04:15:13 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713492913.XXXXXX
00:23:59.049 04:15:13 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713492913.2omkSa
00:23:59.049 04:15:13 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:23:59.049 04:15:13 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:23:59.049 04:15:13 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
00:23:59.049 04:15:13 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
00:23:59.049 04:15:13 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:23:59.049 04:15:13 -- common/autobuild_common.sh@451 -- $ get_config_params
00:23:59.049 04:15:13 -- common/autotest_common.sh@385 -- $ xtrace_disable
00:23:59.049 04:15:13 -- common/autotest_common.sh@10 -- $ set +x
00:23:59.049 04:15:13 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:23:59.049 04:15:13 -- common/autobuild_common.sh@453 -- $ start_monitor_resources
00:23:59.049 04:15:13 -- pm/common@17 -- $ local monitor
00:23:59.049 04:15:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:23:59.049 04:15:13 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=435270
00:23:59.049 04:15:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:23:59.049 04:15:13 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=435272
00:23:59.049 04:15:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:23:59.049 04:15:13 -- pm/common@21 -- $ date +%s
00:23:59.049 04:15:13 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=435274
00:23:59.049 04:15:13 -- pm/common@21 -- $ date +%s
00:23:59.049 04:15:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:23:59.049 04:15:13 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=435277
00:23:59.049 04:15:13 -- pm/common@21 -- $ date +%s
00:23:59.049 04:15:13 -- pm/common@26 -- $ sleep 1
00:23:59.049 04:15:13 -- pm/common@21 -- $ date +%s
00:23:59.049 04:15:13 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713492913
00:23:59.049 04:15:13 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713492913
00:23:59.049 04:15:13 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713492913
00:23:59.049 04:15:13 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713492913
00:23:59.049 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713492913_collect-vmstat.pm.log
00:23:59.049 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713492913_collect-cpu-load.pm.log
00:23:59.049 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713492913_collect-cpu-temp.pm.log
00:23:59.049 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713492913_collect-bmc-pm.bmc.pm.log
00:23:59.991 04:15:14 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
00:23:59.991 04:15:14 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:23:59.991 04:15:14 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:23:59.991 04:15:14 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:23:59.991 04:15:14 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:23:59.991 04:15:14 -- spdk/autopackage.sh@19 -- $ timing_finish
00:23:59.991 04:15:14 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:23:59.991 04:15:14 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:23:59.991 04:15:14 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:23:59.991 04:15:14 -- spdk/autopackage.sh@20 -- $ exit 0
00:23:59.991 04:15:14 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:23:59.991 04:15:14 -- pm/common@30 -- $ signal_monitor_resources TERM
00:23:59.991 04:15:14 -- pm/common@41 -- $ local monitor pid pids signal=TERM
00:23:59.991 04:15:14 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:23:59.991 04:15:14 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:23:59.991 04:15:14 -- pm/common@45 -- $ pid=435284
00:23:59.991 04:15:14 -- pm/common@52 -- $ sudo kill -TERM 435284
00:24:00.251 04:15:14 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:24:00.251 04:15:14 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:24:00.251 04:15:14 -- pm/common@45 -- $ pid=435286
00:24:00.251 04:15:14 -- pm/common@52 -- $ sudo kill -TERM 435286
00:24:00.251 04:15:14 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:24:00.251 04:15:14 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:24:00.251 04:15:14 -- pm/common@45 -- $ pid=435287
00:24:00.251 04:15:14 -- pm/common@52 -- $ sudo kill -TERM 435287
00:24:00.251 04:15:14 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:24:00.251 04:15:14 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:24:00.251 04:15:14 -- pm/common@45 -- $ pid=435290
00:24:00.251 04:15:14 -- pm/common@52 -- $ sudo kill -TERM 435290
00:24:00.251 + [[ -n 5678 ]]
00:24:00.251 + sudo kill 5678
00:24:00.262 [Pipeline] }
00:24:00.279 [Pipeline] // stage
00:24:00.284 [Pipeline] }
00:24:00.302 [Pipeline] // timeout
00:24:00.307 [Pipeline] }
00:24:00.321 [Pipeline] // catchError
00:24:00.326 [Pipeline] }
00:24:00.340 [Pipeline] // wrap
00:24:00.345 [Pipeline] }
00:24:00.360 [Pipeline] // catchError
00:24:00.368 [Pipeline] stage
00:24:00.370 [Pipeline] { (Epilogue)
00:24:00.382 [Pipeline] catchError
00:24:00.384 [Pipeline] {
00:24:00.398 [Pipeline] echo
00:24:00.399 Cleanup processes
00:24:00.405 [Pipeline] sh
00:24:00.694 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:24:00.694 435381 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache
00:24:00.694 435739 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:24:00.708 [Pipeline] sh
00:24:00.997 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:24:00.997 ++ grep -v 'sudo pgrep'
00:24:00.997 ++ awk '{print $1}'
00:24:00.997 + sudo kill -9 435381
00:24:01.009 [Pipeline] sh
00:24:01.339 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:24:08.103 [Pipeline] sh
00:24:08.407 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:24:08.407 Artifacts sizes are good
00:24:08.449 [Pipeline] archiveArtifacts
00:24:08.478 Archiving artifacts
00:24:09.056 [Pipeline] sh
00:24:09.338 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest
00:24:09.351 [Pipeline] cleanWs
00:24:09.361 [WS-CLEANUP] Deleting project workspace...
00:24:09.361 [WS-CLEANUP] Deferred wipeout is used...
00:24:09.367 [WS-CLEANUP] done
00:24:09.369 [Pipeline] }
00:24:09.387 [Pipeline] // catchError
00:24:09.400 [Pipeline] sh
00:24:09.680 + logger -p user.info -t JENKINS-CI
00:24:09.688 [Pipeline] }
00:24:09.704 [Pipeline] // stage
00:24:09.709 [Pipeline] }
00:24:09.725 [Pipeline] // node
00:24:09.730 [Pipeline] End of Pipeline
00:24:09.766 Finished: SUCCESS